Purpose: Many science, technology and innovation (STI) resources are tagged with several different labels. To automatically assign the relevant labels to a given instance, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task in the literature. Furthermore, several open-source tools implementing these approaches have also been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in the following aspects: statement, data quality, and purpose. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn affect the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of conclusions through more rigorous control of variables and expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, the efficacy of multi-label classification is expected to improve significantly, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical label structure and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
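The Macro F1 and Micro F1 metrics used throughout the evaluation can be computed directly from binary label-indicator matrices; the sketch below (toy arrays, not the paper's data) shows both averaging schemes:

```python
import numpy as np

def macro_micro_f1(y_true, y_pred):
    """Compute Macro F1 and Micro F1 for binary label-indicator matrices."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum(axis=0)
    fp = np.logical_and(y_true == 0, y_pred == 1).sum(axis=0)
    fn = np.logical_and(y_true == 1, y_pred == 0).sum(axis=0)

    # Macro F1: unweighted mean of per-label F1 (0 for labels with no
    # positives), so rare labels weigh as much as frequent ones.
    denom = 2 * tp + fp + fn
    per_label_f1 = np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 0.0)
    macro_f1 = per_label_f1.mean()

    # Micro F1: pool TP/FP/FN counts over all labels, so frequent labels
    # dominate the score.
    micro_f1 = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
    return macro_f1, micro_f1

# Toy multi-label data: 3 documents, 4 labels, binary indicator encoding.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 1]])
macro, micro = macro_micro_f1(y_true, y_pred)
print(f"Macro F1 = {macro:.3f}, Micro F1 = {micro:.3f}")  # 0.667 and 0.800
```

The gap between the two scores on unbalanced label sets is exactly why the paper reports both.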
The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information-content criterion. The informativeness of an etalon descriptor is estimated by the difference between the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced description of the etalons. Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered. The results of experimental modeling of the proposed methods on a database of images of museum jewelry are presented. The test sample is formed as a set of images from the etalon database and from outside the database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the threshold for the number of votes on which a classification decision is based has been researched. Modeling has revealed the practical possibility of reducing descriptions tenfold with full preservation of classification accuracy. Reducing the descriptions twentyfold in the experiment leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
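The informativeness-based reduction can be sketched as follows. Euclidean distance, the scoring sign convention, and the function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def reduce_description(etalon, others, keep_fraction=0.1):
    """Keep only the most informative descriptors of an etalon description.

    A descriptor's informativeness is estimated as the distance to the
    closest descriptor of other etalons minus the distance to the closest
    other descriptor of its own etalon, so large scores mark descriptors
    that are distinctive for this etalon.
    """
    scores = []
    for i, d in enumerate(etalon):
        own = np.delete(etalon, i, axis=0)
        d_own = np.linalg.norm(own - d, axis=1).min()
        d_other = np.linalg.norm(others - d, axis=1).min()
        scores.append(d_other - d_own)
    keep = max(1, int(len(etalon) * keep_fraction))
    best = np.argsort(scores)[::-1][:keep]  # highest-scoring descriptors
    return etalon[best]

rng = np.random.default_rng(0)
etalon = rng.normal(size=(20, 8))   # 20 descriptors of one etalon (toy data)
others = rng.normal(size=(50, 8))   # descriptors of all other etalons
reduced = reduce_description(etalon, others, keep_fraction=0.1)
print(reduced.shape)  # tenfold reduction: (2, 8)
```

Matching the full object description is then performed only against the reduced etalon set, which is where the proportional speed-up comes from.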
Due to the limited computational capability and the diversity of Internet of Things devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed with a convolutional layer to supplement frequency-domain information as well as to provide a nonlinear mapping that is beneficial for AMC. The multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deep supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method can significantly improve AMC accuracy compared to existing methods.
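The cooperative-classifier idea, combining per-model outputs so that no single model's randomness decides the label, can be sketched as simple probability averaging; the averaging rule is an illustrative assumption about how the cooperation is realized:

```python
import numpy as np

def cooperative_classify(model_probs):
    """Combine the class-probability outputs of several models by
    averaging, then pick the modulation class with the highest mean.

    model_probs: array-like of shape (n_models, n_classes).
    """
    mean_probs = np.asarray(model_probs, dtype=float).mean(axis=0)
    return int(np.argmax(mean_probs))

# Three models scoring four modulation classes; models 1 and 3 agree on
# class 2, so the ensemble overrides model 2's outlier vote.
probs = [[0.1, 0.2, 0.6, 0.1],
         [0.1, 0.5, 0.3, 0.1],
         [0.2, 0.1, 0.5, 0.2]]
print(cooperative_classify(probs))  # 2
```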
Condensed and hydrolysable tannins are non-toxic natural polyphenols that are a commercial commodity industrialized for tanning hides to obtain leather and for a growing number of other industrial applications, mainly as substitutes for petroleum-based products. They are a definite class of sustainable materials of the forestry industry. They have been used for hundreds of years to manufacture leather and now serve a growing number of applications in a variety of other industries, such as wood adhesives, metal coating and pharmaceutical/medical applications, among several others. This review presents the main sources, whether already commercial or potentially so, of these forestry by-products; their industrial and laboratory extraction systems; and their systems of analysis with their advantages and drawbacks, whether these methods are so simple as to appear primitive yet of proven effectiveness, or very modern and instrumental. It constitutes a basic but essential summary of what one needs to know about these sustainable materials. In doing so, the review highlights some of the main challenges that remain to be addressed to deliver the quality and economics of tannin supply necessary to fulfill industrial production requirements for some materials-based uses.
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance of the model. A method called MSHR-FCSSVM for imbalanced data classification is proposed in this article, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value calculated from the Mahalanobis distance between samples; based on this, the so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influences of both sample-number imbalance and class distribution on classification, and it finely tunes the class cost weights using the efficient rime-ice (RIME) optimization algorithm, with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
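The screening idea, scoring each negative sample by a Silhouette value computed with Mahalanobis distance and flagging low-scoring samples as pseudo-negatives, can be sketched as below. The pooled inverse covariance and the zero screening threshold are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def mahalanobis_silhouette(X, y, sample_idx):
    """Silhouette value of one sample under Mahalanobis distance.

    s = (b - a) / max(a, b), where a is the mean distance to samples of
    the same class and b is the mean distance to the other class.
    """
    VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance of the data

    def mdist(z):
        d = z - X[sample_idx]
        return np.sqrt(d @ VI @ d)

    same = [mdist(X[j]) for j in range(len(X))
            if y[j] == y[sample_idx] and j != sample_idx]
    other = [mdist(X[j]) for j in range(len(X)) if y[j] != y[sample_idx]]
    a, b = np.mean(same), np.mean(other)
    return (b - a) / max(a, b)

# Toy imbalanced data: 8 negatives near the origin, 3 positives shifted.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (8, 2)), rng.normal(3, 1, (3, 2))])
y = np.array([0] * 8 + [1] * 3)

# Negatives with low (e.g. negative) silhouette sit in the class overlap
# and would be treated as pseudo-negatives for the resampling steps.
scores = [mahalanobis_silhouette(X, y, i) for i in range(8)]
pseudo_negatives = [i for i, s in enumerate(scores) if s < 0]
print(pseudo_negatives)
```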
The technology of drilling tests makes it possible to obtain the strength parameters of rock accurately in situ. In this paper, a new rock cutting analysis model that considers the influence of the rock crushing zone (RCZ) is built. The formula for the ultimate cutting force is established based on the limit equilibrium principle. The relationship between digital drilling parameters (DDP) and the c-φ parameters (the DDP-cφ formula, where c refers to the cohesion and φ refers to the internal friction angle) is derived, and the response of drilling parameters and cutting ratio to the strength parameters is analyzed. A drilling-based measuring method for the c-φ parameters of rock is constructed. A laboratory verification test is then completed, and the difference in results between the drilling test and the compression test is less than 6%. On this basis, in-situ rock drilling tests in a traffic tunnel and a coal mine roadway are carried out, and the strength parameters of the surrounding rock are effectively tested. The average difference ratio of the results is less than 11%, which verifies the effectiveness of the proposed method for obtaining strength parameters based on digital drilling. This study provides methodological support for field testing of rock strength parameters.
In this study, the structural characteristics, antioxidant activities and bile acid-binding ability of sea buckthorn polysaccharides (HRPs) obtained by the commonly used hot water (HRP-W), pressurized hot water (HRP-H), ultrasonic (HRP-U), acid (HRP-C) and alkali (HRP-A) assisted extraction methods were investigated. The results demonstrated that the extraction methods had significant effects on the extraction yield, monosaccharide composition, molecular weight, particle size, triple-helical structure, and surface morphology of the HRPs, except for the major linkage bands. Thermogravimetric analysis showed that HRP-U, with its filamentous reticular microstructure, exhibited better thermal stability. HRP-A, with the lowest molecular weight and highest arabinose content, possessed the best antioxidant activities. Moreover, rheological analysis indicated that the HRPs with higher galacturonic acid content and molecular weight (HRP-C, HRP-W and HRP-U) showed higher viscosity and a stronger crosslinking network, and exhibited stronger bile acid-binding capacity. The present findings provide scientific evidence for the preparation of sea buckthorn polysaccharides with good antioxidant and bile acid-binding capacity, which are related to the structure as affected by the extraction methods.
The network of Himalayan roadways and highways connects some remote regions of valleys and hill slopes, which is vital for India's socio-economic growth. Due to natural and artificial factors, the frequency of slope instabilities along these networks has been increasing over the last few decades. Assessing the stability of natural and artificial slopes affected by the construction of these connecting road networks is essential for keeping the roads safely operational throughout the year. Several rock mass classification methods are generally used to assess the strength and deformability of rock mass. This study assesses slope stability along NH-1A in the Ramban district of the North Western Himalayas. Various structurally and non-structurally controlled rock mass classification systems have been applied to assess the stability conditions of 14 slopes. To evaluate the stability of these slopes, kinematic analysis was performed along with the geological strength index (GSI), rock mass rating (RMR), continuous slope mass rating (CoSMR), slope mass rating (SMR), and Q-slope. The SMR classifies three slopes as completely unstable, while the CoSMR suggests four slopes as completely unstable. The stability of all slopes was also analyzed using a design chart under dynamic and static conditions by slope stability rating (SSR) for factors of safety (FoS) of 1.2 and 1, respectively. Q-slope with a probability of failure (PoF) of 1% gives two slopes as stable. Stable slope angles have been determined based on the Q-slope safe-angle equation and the SSR design chart based on the FoS. The value ranges given by the different empirical classifications were RMR (37-74), GSI (27.3-58.5), SMR (11-59), and CoSMR (3.39-74.56). Good relationships were found between RMR and SSR and between RMR and GSI, with correlation coefficients (R²) of 0.815 and 0.6866, respectively. Lastly, a comparative stability assessment of all these slopes based on the above classifications has been performed to identify the most critical slope along this road.
Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and to establish the relationship between the manifestations of the intrahepatic portal vein and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and 5 females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. The visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility revealed with DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein measured by DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA for revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, derived from the imaging features of the portal vein revealed by DPV, may provide a new perspective for the diagnosis and treatment of CTPV.
The material point method (MPM) has been gaining increasing popularity as an appropriate approach to the solution of coupled hydro-mechanical problems involving large deformation. In this paper, we survey the current state of the art in MPM simulation of hydro-mechanical behaviour in two-phase porous geomaterials. The review covers recent advances and developments in the MPM and its extensions to capture coupled hydro-mechanical problems involving large deformations. This review aims to provide a clear picture of what has or has not been developed or implemented for simulating two-phase coupled large-deformation problems, which will serve as a direct reference for both practitioners and researchers.
In this study, a new rain type classification algorithm for the Dual-Frequency Precipitation Radar (DPR) suitable for the Tibetan Plateau (TP) was proposed by analyzing Global Precipitation Measurement (GPM) DPR Level-2 data in summer from 2014 to 2020. It was found that the DPR rain type classification algorithm (hereafter the DPR algorithm) has mis-identification problems in two respects over the summer TP. In the new rain type classification algorithm for the summer TP, four rain types are classified using new thresholds: the maximum reflectivity factor, the difference between the maximum reflectivity factor and the background maximum reflectivity factor, and the echo top height. Among the maximum reflectivity factor thresholds, 30 dBZ and 18 dBZ are both used to separate strong convective precipitation, weak convective precipitation and weak precipitation. The results illustrate obvious differences in radar reflectivity factor and vertical velocity among the three rain types over the summer TP: the reflectivity factor of most strong convective precipitation is distributed from 15 dBZ to near 35 dBZ between 4 km and 13 km, and increases almost linearly with decreasing height; for most weak convective precipitation, the reflectivity factor is distributed from 15 dBZ to 28 dBZ at heights from 4 km to 9 km; and for weak precipitation, the reflectivity factor is mainly distributed in the range 15-25 dBZ at heights within 4-10 km. It is also shown that weak precipitation is the dominant rain type over the summer TP, accounting for 40%-80%, followed by weak convective precipitation (25%-40%), while strong convective precipitation has the smallest proportion (less than 30%).
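The threshold logic for separating the rain types can be sketched as follows; the decision order and the 9 km echo-top threshold used here are illustrative assumptions (the abstract specifies only the 30 dBZ and 18 dBZ reflectivity thresholds):

```python
def classify_rain_type(max_dbz, background_max_dbz, echo_top_km,
                       strong_dbz=30.0, weak_dbz=18.0, top_height_km=9.0):
    """Toy rain-type classifier in the spirit of the new TP algorithm.

    Thresholds on the maximum reflectivity factor, its excess over the
    background maximum, and the echo top height separate the rain types.
    The decision order and the 9 km echo-top value are assumptions.
    """
    excess = max_dbz - background_max_dbz
    if max_dbz >= strong_dbz:
        return "strong convective"
    if max_dbz >= weak_dbz and (excess > 0 or echo_top_km >= top_height_km):
        return "weak convective"
    if max_dbz >= weak_dbz:
        return "weak"
    return "other"

print(classify_rain_type(33.0, 25.0, 12.0))  # strong convective
print(classify_rain_type(22.0, 24.0, 7.0))   # weak
```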
While encryption technology safeguards the security of network communications, malicious traffic also uses encryption protocols to obscure its malicious behavior. To address the issues of traditional machine learning methods relying on expert experience and the insufficient representation capabilities of existing deep learning methods for encrypted malicious traffic, we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatiotemporal features, called the BERT-based Spatio-Temporal Features Network (BSTFNet). At the packet-level granularity, the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers (BERT) model. At the byte-level granularity, we first employ the Bidirectional Gated Recurrent Unit (BiGRU) model to extract temporal features from bytes, followed by the Text Convolutional Neural Network (TextCNN) model with multi-sized convolution kernels to extract local multi-receptive-field spatial features. The fusion of features from both granularities serves as the ultimate multidimensional representation of malicious traffic. Our approach achieves accuracy and F1-score of 99.39% and 99.40%, respectively, on the publicly available USTC-TFC2016 dataset, and effectively reduces sample confusion within the Neris and Virut categories. The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.
Although disintegrated dolomite, widely distributed across the globe, has conventionally been a focus of research in underground engineering, the issue of slope stability in disintegrated dolomite strata is gaining increasing prominence. This is primarily due to its unique properties, including low strength and loose structure. Current methods for evaluating slope stability, such as basic quality (BQ) and slope stability probability classification (SSPC), do not adequately account for the poor integrity and structural fragmentation characteristic of disintegrated dolomite. To address this challenge, an analysis of the applicability of the limit equilibrium method (LEM), BQ, and SSPC methods was conducted on eight disintegrated dolomite slopes located in Baoshan, Southwest China; however, conflicting results were obtained. Therefore, this paper introduces a novel method, SMRDDS, to provide rapid and accurate assessment of disintegrated dolomite slope stability. This method incorporates parameters such as disintegration grade, joint state, groundwater conditions, and excavation methods. The findings reveal that six slopes are stable, while two are considered partially unstable. Notably, the proposed method matches actual conditions more closely and is more time-efficient than the BQ and SSPC methods. However, due to the limited research on disintegrated dolomite slopes, the results of the SMRDDS method tend to be conservative, as a safety precaution. In conclusion, the SMRDDS method can quickly evaluate the current state of disintegrated dolomite slopes in the field, which contributes significantly to disaster risk reduction for disintegrated dolomite slopes.
In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random errors in conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Second, low-pass-filter-based LSP models are constructed by eliminating the random errors using the low-pass filter method. Third, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in conditioning factors and thus decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same; and (5) the Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
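The error-injection-and-filtering idea can be sketched on a single conditioning factor treated as a 1-D sequence: add proportional random noise, then suppress it with a low-pass filter. The moving-average filter, its window size, and the synthetic profile below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth "conditioning factor" profile (e.g. a terrain attribute sampled
# along a transect); the shape is purely illustrative.
x = np.linspace(0, 4 * np.pi, 400)
factor = np.sin(x) + 0.5 * np.cos(2 * x)

# Inject 10% proportional random error, mimicking the errors-based models.
noisy = factor + 0.10 * np.abs(factor) * rng.standard_normal(factor.size)

# Simple FIR low-pass filter: an 11-point moving average attenuates the
# high-frequency random error while keeping the low-frequency trend.
w = 11
filtered = np.convolve(noisy, np.ones(w) / w, mode="same")

# Compare errors away from the window edges.
inner = slice(w, -w)
rmse_noisy = np.sqrt(np.mean((noisy[inner] - factor[inner]) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered[inner] - factor[inner]) ** 2))
print(rmse_filtered < rmse_noisy)  # expected: True
```

The filtered factor, not the noisy one, would then be fed to the MLP/SVM/RF models, which is the essence of the low-pass-filter-based LSP models.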
As a calculation method based on the Galerkin variation, the numerical manifold method (NMM) adopts a double covering system, which can easily deal with discontinuous deformation problems and has high calculation accuracy. Aiming at the thermo-mechanical (TM) coupling problem of fractured rock masses, this study uses the NMM to simulate the processes of crack initiation and propagation in a rock mass under the influence of a temperature field, derives the related system equations, and proposes a penalty function method to deal with the boundary conditions. Numerical examples are employed to confirm the effectiveness and high accuracy of this method. Through the thermal stress analysis of a thick-walled cylinder (TWC), the simulation of cracking in the TWC under heating and cooling conditions, and the simulation of thermal cracking of the Swedish Äspö Pillar Stability Experiment (APSE) rock column, the thermal stress and TM coupling results are obtained. The numerical simulation results are in good agreement with the test data and other numerical results, thus verifying the effectiveness of the NMM in dealing with thermal stress and crack propagation problems of fractured rock masses.
This study presents a method for the inverse analysis of fluid flow problems. The focus is on accurately determining boundary conditions and characterizing the physical properties of granular media, such as permeability, and of fluid components, such as viscosity. The primary aim is to deduce either a constant pressure head or pressure profiles, given the known velocity field at steady-state flow through a conduit containing obstacles, including walls, spheres, and grains. The lattice Boltzmann method (LBM) combined with automatic differentiation (AD), called AD-LBM, is employed, with the help of the GPU-capable Taichi programming language. A lightweight tape is used to generate gradients for the entire LBM simulation, enabling end-to-end backpropagation. Our AD-LBM approach accurately estimates the boundary conditions leading to the observed steady-state velocity fields for complex flow paths in porous media, and derives macro-scale permeability and fluid viscosity. The method demonstrates significant advantages in prediction accuracy and computational efficiency, making it a powerful tool for solving inverse fluid flow problems in various applications.
Background: Robustness is a measure of an analytical chemical method's ability to remain unaffected by small, deliberate variations in method parameters. The variation parameters include the pH of the buffer solution in the mobile phase, changes in the organic ratio of the mobile phase, variation in the stationary phase (column) manufacturer, brand name and lot number, and variation in the flow rate and temperature of the chromatographic system. The analytical method for the assay of Atropine Sulfate was subjected to robustness evaluation, considering typical variations in mobile phase organic ratio, pH, temperature, flow rate, and column. Purpose: The aim of this study is to develop a cost-effective, short-run-time and robust analytical chemical method for the assay quantification of Atropine in a pharmaceutical ophthalmic solution. This will help research and development scientists make analytical decisions quickly, and it will support quality control product release for patient consumption. The method will help meet market demand through quick quality control testing of Atropine Ophthalmic Solution and makes it easy to maintain good documentation practices (GDP) within the shortest period of time. Method: HPLC was selected to develop a method superior to the compendial method. Both the compendial HPLC method and the developed HPLC method were run on the same HPLC system to demonstrate the superiority of the developed method. Sensitivity, precision, reproducibility and accuracy parameters were considered in assessing this superiority. Mobile phase ratio, pH of the buffer solution, stationary phase temperature, flow rate and column were varied for the robustness study of the developed method. Results: The limit of quantitation (LOQ) of the developed method was much lower than that of the compendial method. The %RSD for the six-sample assay of the developed method was 0.4%, whereas that of the compendial method was 1.2%. The reproducibility between two analysts was 100.4% for the developed method; in contrast, the compendial method gave 98.4%.
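The %RSD figure quoted for the six-sample assay is the percent relative standard deviation; a short sketch of its computation (the assay values below are made up for illustration, not taken from the study):

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: 100 * sample st. dev. / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Six hypothetical assay results (% of label claim) illustrating a method
# with roughly 0.2% RSD.
assays = [99.8, 100.1, 100.4, 99.9, 100.2, 100.0]
print(f"%RSD = {percent_rsd(assays):.2f}%")
```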
In this study, the vertical components of broadband teleseismic P wave data recorded by the China Earthquake Network are used to image the rupture processes of the February 6th, 2023 Turkish earthquake doublet via back projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both the northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture feature. The cascading ruptures on both sides also reflect the occurrence of selective rupture behaviors on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
Background: Impurities are not expected in the final pharmaceutical products. All impurities should be regulated in both drug substances and drug products in accordance with pharmacopeias and ICH guidelines. Three different types of impurities generally appear in a pharmaceutical product specification: organic impurities, inorganic impurities, and residual solvents. Residual solvents are organic volatile chemicals used or generated during the manufacturing of drug substances or drug products. Purpose: The aim of this study is to develop a cost-effective gas chromatographic method for the identification and quantification of some commonly used solvents—methanol, acetone, isopropyl alcohol (IPA), methylene chloride, ethyl acetate, tetrahydrofuran (THF), benzene, toluene, and pyridine—in pharmaceutical product manufacturing. This method can identify and quantify multiple solvents within a single gas chromatographic procedure. Method: A gas chromatograph (GC) equipped with a headspace sampler and a flame ionization detector, with a DB-624 column (30 m long × 0.32 mm internal diameter, 1.8 μm film thickness, Agilent), was used to develop this method. The initial GC oven temperature was 40°C, held for 5 minutes. It was then increased to 80°C at a rate of 2°C per minute, followed by a further increase to 225°C at a rate of 30°C per minute, with a final hold at 225°C for 10 minutes. Nitrogen was used as the carrier gas at a flow rate of 1.20 mL per minute. Dimethyl sulfoxide (DMSO) was selected as the sample solvent. Results: The developed method is precise and specific. The % RSD for the areas of six replicate injections of this gas chromatographic method was within 10.0%, and the recovery results were within 80.0% to 120.0%.
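The total run time of the oven program described above follows directly from the ramp rates and holds. A minimal sketch (the function name and segment encoding are illustrative, not from the paper):

```python
def gc_runtime_minutes(start_temp, segments):
    """Total GC oven program time.

    Each segment is (ramp_rate_C_per_min, target_C, hold_min);
    a ramp_rate of None means an isothermal hold at the current temperature.
    """
    t, temp = 0.0, start_temp
    for rate, target, hold in segments:
        if rate is not None:
            t += (target - temp) / rate  # time spent ramping to the target
            temp = target
        t += hold  # time spent holding
    return t

# Program from the abstract: 40 °C for 5 min, ramp at 2 °C/min to 80 °C,
# ramp at 30 °C/min to 225 °C, then hold 10 min.
total = gc_runtime_minutes(40, [(None, 40, 5), (2, 80, 0), (30, 225, 0), (None, 225, 10)])
```

With these values the program takes about 39.8 minutes, most of it in the slow 2 °C/min ramp.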
Funding: the Natural Science Foundation of China (Grant Numbers 72074014 and 72004012).
Abstract: Purpose: Many science, technology and innovation (STI) resources are tagged with several different labels. To automatically assign the appropriate labels to an instance of interest, many approaches with good performance on benchmark datasets have been proposed in the literature for the multi-label classification task. Furthermore, several open-source tools implementing these approaches have also been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in the following aspects: statement, data quality, and purposes. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn impact the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of the conclusions by exercising more rigorous control over variables through expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, it is expected that the efficacy of multi-label classification tasks will be significantly improved, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical label structure and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
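The Macro F1 and Micro F1 scores used in this evaluation aggregate per-label true/false positives in two different ways: macro averages the per-label F1 values, while micro pools the counts over all labels first. A minimal sketch with hypothetical toy labels:

```python
def micro_macro_f1(y_true, y_pred, n_labels):
    """y_true / y_pred: one set of label indices per document."""
    def f1(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    per_label, totals = [], [0, 0, 0]
    for k in range(n_labels):
        tp = sum(1 for t, p in zip(y_true, y_pred) if k in t and k in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if k not in t and k in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if k in t and k not in p)
        per_label.append(f1(tp, fp, fn))
        totals = [totals[0] + tp, totals[1] + fp, totals[2] + fn]

    macro = sum(per_label) / n_labels  # unweighted mean over labels
    micro = f1(*totals)                # pooled counts over all labels
    return macro, micro

# Toy example: 3 documents, 3 possible labels (hypothetical data).
macro, micro = micro_macro_f1([{0, 1}, {1}, {2}], [{0}, {1, 2}, {2}], 3)
```

On unbalanced label distributions the two scores diverge: rare labels weigh as much as common ones in Macro F1 but barely move Micro F1, which is why the paper reports both.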
Funding: This research was funded by Prince Sattam bin Abdulaziz University (Project Number PSAU/2023/01/25387).
Abstract: The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information-content criterion. The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and other descriptions. The developed method determines the relevance of the full description of the recognized object with the reduced description of the etalons. Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered. The results of the experimental modeling of the proposed methods for a database including images of museum jewelry are presented. The test sample is formed as a set of images from the etalon database and outside the database, with the application of geometric transformations of scale and rotation in the field of view. The practical problem of determining the threshold for the number of votes, on which a classification decision is based, has been researched. Modeling has revealed the practical possibility of reducing descriptions tenfold with full preservation of classification accuracy. Reducing the descriptions by twenty times in the experiment leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
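The informativeness criterion described above (difference of the closest distances to a descriptor's own and other descriptions) could be sketched as follows. The function name, the scoring sign convention, and the top-k selection are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def reduce_etalon_base(desc, owner, keep_fraction=0.1):
    """Keep the most informative keypoint descriptors of each etalon.

    desc:  (n, d) array of descriptors; owner: (n,) etalon index per descriptor.
    Informativeness = nearest distance to descriptors of OTHER etalons
    minus nearest distance to other descriptors of the SAME etalon,
    so a larger score means a more discriminative descriptor.
    """
    # Pairwise Euclidean distances, self-distance masked out.
    dist = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    same = owner[:, None] == owner[None, :]
    d_own = np.where(same, dist, np.inf).min(axis=1)
    d_other = np.where(~same, dist, np.inf).min(axis=1)
    score = d_other - d_own
    k = max(1, int(keep_fraction * len(desc)))
    return np.argsort(score)[::-1][:k]  # indices of the most informative descriptors
```

With keep_fraction=0.1 this realizes the tenfold reduction the experiments report; matching then runs against only the retained rows, which is where the proportional speed-up comes from.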
Funding: supported in part by the National Key Research and Development Program of China under Grant 2021YFB2900404.
Abstract: Due to the limited computational capability and the diversity of Internet of Things devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed with a convolutional layer to supplement frequency-domain information and to provide a nonlinear mapping that is beneficial for AMC. A multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deeply supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method can significantly improve AMC accuracy compared to existing methods.
Abstract: Condensed and hydrolysable tannins are non-toxic natural polyphenols that are a commercial commodity industrialized for tanning hides to obtain leather and for a growing number of other industrial applications, mainly to substitute petroleum-based products. They are a definite class of sustainable materials from the forestry industry. They have been in use for hundreds of years to manufacture leather and now serve a growing number of applications in a variety of other industries, such as wood adhesives, metal coating, pharmaceutical/medical applications and several others. This review presents the main sources, either already or potentially commercial, of these forestry by-products, their industrial and laboratory extraction systems, and their systems of analysis with their advantages and drawbacks, be these methods so simple as to even appear primitive but nonetheless of proven effectiveness, or very modern and instrumental. It constitutes a basic but essential summary of what is necessary to know about these sustainable materials. In doing so, the review highlights some of the main challenges that remain to be addressed to deliver the quality and economics of tannin supply necessary to fulfill the industrial production requirements for some materials-based uses.
Funding: supported by the Yunnan Major Scientific and Technological Projects (Grant No. 202302AD080001) and the National Natural Science Foundation of China (No. 52065033).
Abstract: When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance biases the trained classification model toward the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance of the model. This article proposes a method called MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value calculated with the Mahalanobis distance between samples; based on this, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influences of both the imbalance in sample numbers and the class distribution on classification, and finely tunes the class cost weights with an efficient optimization algorithm based on the physical phenomenon of rime ice (RIME), using cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
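The one-for-one replacement step of the resampling idea (generate a positive by linear interpolation, delete a pseudo-negative, keep the dataset size fixed) could be sketched as below. The Silhouette/Mahalanobis screening that identifies the pseudo-negatives is assumed to have happened elsewhere, and all names are illustrative:

```python
import numpy as np

def replace_pseudo_negatives(X, y, pseudo_idx, rng=None):
    """Illustrative sketch of MSHR-style replacement, not the paper's exact code.

    For each screened-out 'pseudo-negative' sample (an index into X with label 0),
    generate a new positive by linear interpolation between two random existing
    positives (over-sampling), and overwrite the pseudo-negative's slot with it
    (under-sampling). The overall dataset size is unchanged.
    """
    rng = np.random.default_rng(rng)
    X, y = X.copy(), y.copy()
    pos = np.flatnonzero(y == 1)
    for i in pseudo_idx:
        a, b = X[rng.choice(pos, 2, replace=False)]
        t = rng.random()
        X[i] = a + t * (b - a)  # interpolated new positive sample
        y[i] = 1                # pseudo-negative slot now holds a positive
    return X, y
```

Because each deletion is paired with one synthetic positive, the class ratio improves while the total number of samples, and hence the training cost, stays constant.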
Funding: supported by the National Key Research and Development Program of China (No. 2023YFC2907600), the National Natural Science Foundation of China (Nos. 42077267, 42277174 and 52074164), the Natural Science Foundation of Shandong Province, China (No. ZR2020JQ23), the Opening Project of the State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology (No. KFJJ21-02Z), and the Fundamental Research Funds for the Central Universities, China (No. 2022JCCXSB03).
Abstract: The technology of drilling tests makes it possible to obtain the strength parameters of rock accurately in situ. In this paper, a new rock cutting analysis model that considers the influence of the rock crushing zone (RCZ) is built. The formula for the ultimate cutting force is established based on the limit equilibrium principle. The relationship between digital drilling parameters (DDP) and the c-φ parameters (the DDP-cφ formula, where c refers to the cohesion and φ refers to the internal friction angle) is derived, and the response of drilling parameters and cutting ratio to the strength parameters is analyzed. A drilling-based measuring method for the c-φ parameters of rock is constructed. A laboratory verification test is then completed, and the difference between the results of the drilling test and the compression test is less than 6%. On this basis, in-situ rock drilling tests in a traffic tunnel and a coal mine roadway are carried out, and the strength parameters of the surrounding rock are effectively tested. The average difference ratio of the results is less than 11%, which verifies the effectiveness of the proposed method for obtaining strength parameters based on digital drilling. This study provides methodological support for field testing of rock strength parameters.
Funding: the Guangdong Basic and Applied Basic Research Foundation (2022A1515010730 and 2019A1515011996), the National Natural Science Foundation of China (32001647 and 31972022), and the 111 Project (B17018).
Abstract: In this study, the structural characteristics, antioxidant activities and bile acid-binding ability of sea buckthorn polysaccharides (HRPs) obtained by the commonly used hot water (HRP-W), pressurized hot water (HRP-H), ultrasonic (HRP-U), acid (HRP-C) and alkali (HRP-A) assisted extraction methods were investigated. The results demonstrated that the extraction methods had significant effects on the extraction yield, monosaccharide composition, molecular weight, particle size, triple-helical structure, and surface morphology of the HRPs, except for the major linkage bands. Thermogravimetric analysis showed that HRP-U, with a filamentous reticular microstructure, exhibited better thermal stability. HRP-A, with the lowest molecular weight and highest arabinose content, possessed the best antioxidant activities. Moreover, rheological analysis indicated that HRPs with higher galacturonic acid content and molecular weight (HRP-C, HRP-W and HRP-U) showed higher viscosity and a stronger crosslinking network, and exhibited stronger bile acid-binding capacity. The present findings provide scientific evidence for the preparation of sea buckthorn polysaccharides with good antioxidant and bile acid-binding capacity, which are related to the structure as affected by the extraction method.
Abstract: The network of Himalayan roadways and highways connects some remote regions of valleys or hill slopes, which is vital for India's socio-economic growth. Due to natural and artificial factors, the frequency of slope instabilities along these networks has been increasing over the last few decades. Assessment of the stability of natural and artificial slopes affected by the construction of these connecting road networks is significant for operating the roads safely throughout the year. Several rock mass classification methods are generally used to assess the strength and deformability of rock mass. This study assesses slope stability along the NH-1A of Ramban district of the North Western Himalayas. Various structurally and non-structurally controlled rock mass classification systems have been applied to assess the stability conditions of 14 slopes. For evaluating the stability of these slopes, kinematic analysis was performed along with the geological strength index (GSI), rock mass rating (RMR), continuous slope mass rating (CoSMR), slope mass rating (SMR), and Q-slope. The SMR gives three slopes as completely unstable, while the CoSMR suggests four slopes as completely unstable. The stability of all slopes was also analyzed using a design chart under dynamic and static conditions by slope stability rating (SSR) for factors of safety (FoS) of 1.2 and 1, respectively. Q-slope with a probability of failure (PoF) of 1% gives two slopes as stable. Stable slope angles have been determined based on the Q-slope safe angle equation and the SSR design chart based on the FoS. The value ranges given by the different empirical classifications were RMR (37-74), GSI (27.3-58.5), SMR (11-59), and CoSMR (3.39-74.56). A good relationship was found between RMR & SSR and between RMR & GSI, with correlation coefficient (R²) values of 0.815 and 0.6866, respectively. Lastly, a comparative stability assessment of all these slopes based on the above classifications has been performed to identify the most critical slope along this road.
Abstract: Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and establish the relationship between the manifestations of the intrahepatic portal vein and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and 5 females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. The visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility revealed with DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein measured by DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA for revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, which originated from the imaging features of the portal vein revealed by DPV, may provide a new perspective for the diagnosis and treatment of CTPV.
Funding: financial support from the National Outstanding Youth Science Fund Project of the National Natural Science Foundation of China (Grant No. 52022112) and the International Postdoctoral Exchange Fellowship Program (Talent-Introduction Program, Grant No. YJ20220219).
Abstract: The material point method (MPM) has been gaining increasing popularity as an appropriate approach to the solution of coupled hydro-mechanical problems involving large deformation. In this paper, we survey the current state of the art in MPM simulation of hydro-mechanical behaviour in two-phase porous geomaterials. The review covers recent advances and developments in the MPM and its extensions to capture coupled hydro-mechanical problems involving large deformations. This review aims to provide a clear picture of what has or has not been developed or implemented for simulating two-phase coupled large-deformation problems, offering a direct reference for both practitioners and researchers.
Funding: funded by the National Natural Science Foundation of China project (Grant Nos. 42275140, 42230612, 91837310 and 92037000) and the Second Tibetan Plateau Scientific Expedition and Research (STEP) program (Grant No. 2019QZKK0104).
Abstract: In this study, a new rain type classification algorithm for the Dual-Frequency Precipitation Radar (DPR) suitable over the Tibetan Plateau (TP) was proposed by analyzing Global Precipitation Measurement (GPM) DPR Level-2 data in summer from 2014 to 2020. It was found that the DPR rain type classification algorithm (simply called the DPR algorithm) has mis-identification problems in two aspects over the summer TP. In the new rain type classification algorithm for the summer TP, four rain types are classified using new thresholds, such as the maximum reflectivity factor, the difference between the maximum reflectivity factor and the background maximum reflectivity factor, and the echo top height. For the maximum reflectivity factor, 30 dBZ and 18 dBZ are both thresholds used to separate strong convective precipitation, weak convective precipitation and weak precipitation. The results illustrate obvious differences in radar reflectivity factor and vertical velocity among the three rain types over the summer TP: the reflectivity factor of most strong convective precipitation is distributed from 15 dBZ to near 35 dBZ between 4 km and 13 km, and increases almost linearly with decreasing height. For most weak convective precipitation, the reflectivity factor is distributed from 15 dBZ to 28 dBZ at heights from 4 km to 9 km. For weak precipitation, the reflectivity factor mainly lies in the range of 15-25 dBZ at heights within 4-10 km. It is also shown that weak precipitation is the dominant rain type over the summer TP, accounting for 40%-80%, followed by weak convective precipitation (25%-40%), while strong convective precipitation has the smallest proportion (less than 30%).
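A threshold-based rule of this kind could be sketched as below. Only the 30 dBZ and 18 dBZ reflectivity thresholds come from the abstract; the difference threshold, the echo-top threshold, and the ordering of the checks are illustrative assumptions, not the paper's algorithm:

```python
def classify_rain_type(max_dbz, bg_max_dbz, echo_top_km,
                       strong_dbz=30.0, weak_dbz=18.0,
                       diff_dbz=3.0, top_km=8.0):
    """Toy decision rule in the spirit of the new TP classification.

    max_dbz:    maximum reflectivity factor of the profile (dBZ)
    bg_max_dbz: background maximum reflectivity factor (dBZ)
    echo_top_km: echo top height (km)
    diff_dbz and top_km are hypothetical placeholder thresholds.
    """
    if max_dbz >= strong_dbz:
        return "strong convective"
    if max_dbz >= weak_dbz and (max_dbz - bg_max_dbz >= diff_dbz
                                or echo_top_km >= top_km):
        return "weak convective"
    if max_dbz >= weak_dbz:
        return "weak"
    return "no rain"
```

The point of the sketch is the structure: each profile is routed by a small set of scalar features, which is why retuning the thresholds for the TP is enough to change the classification behavior.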
Funding: This research was funded by the National Natural Science Foundation of China under Grant No. 61806171, the Sichuan University of Science & Engineering Talent Project under Grant No. 2021RC15, the Open Fund Project of the Key Laboratory for Non-Destructive Testing and Engineering Computer of Sichuan Province Universities on Bridge Inspection and Engineering under Grant No. 2022QYJ06, the Sichuan University of Science & Engineering Graduate Student Innovation Fund under Grant No. Y2023115, and the Scientific Research and Innovation Team Program of Sichuan University of Science and Technology under Grant No. SUSE652A006.
Abstract: While encryption technology safeguards the security of network communications, malicious traffic also uses encryption protocols to obscure its malicious behavior. To address the reliance of traditional machine learning methods on expert experience and the insufficient representation capabilities of existing deep learning methods for encrypted malicious traffic, we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatio-temporal features, called the BERT-based Spatio-Temporal Features Network (BSTFNet). At the packet-level granularity, the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers (BERT) model. At the byte-level granularity, we first employ the Bidirectional Gated Recurrent Unit (BiGRU) model to extract temporal features from bytes, followed by the Text Convolutional Neural Network (TextCNN) model with multi-sized convolution kernels to extract local multi-receptive-field spatial features. The fusion of features from both granularities serves as the final multidimensional representation of the malicious traffic. Our approach achieves accuracy and F1-score of 99.39% and 99.40%, respectively, on the publicly available USTC-TFC2016 dataset, and effectively reduces sample confusion within the Neris and Virut categories. The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.
Funding: supported by the National Natural Science Foundation of China (Grant No. 42162026) and the Applied Basic Research Foundation of Yunnan Province (Grant No. 202201AT070083).
Abstract: Although disintegrated dolomite, widely distributed across the globe, has conventionally been a focus of research in underground engineering, the issue of slope stability in disintegrated dolomite strata is gaining increasing prominence. This is primarily due to its unique properties, including low strength and loose structure. Current methods for evaluating slope stability, such as basic quality (BQ) and slope stability probability classification (SSPC), do not adequately account for the poor integrity and structural fragmentation characteristic of disintegrated dolomite. To address this challenge, an analysis of the applicability of the limit equilibrium method (LEM), BQ, and SSPC methods was conducted on eight disintegrated dolomite slopes located in Baoshan, Southwest China; however, conflicting results were obtained. Therefore, this paper introduces a novel method, SMRDDS, to provide rapid and accurate assessment of disintegrated dolomite slope stability. This method incorporates parameters such as disintegration grade, joint state, groundwater conditions, and excavation methods. The findings reveal that six slopes are stable, while two are considered partially unstable. Notably, the proposed method matches actual conditions more closely and is more time-efficient than the BQ and SSPC methods. However, due to the limited research on disintegrated dolomite slopes, the results of the SMRDDS method tend to be conservative as a safety precaution. In conclusion, the SMRDDS method can quickly evaluate the current situation of disintegrated dolomite slopes in the field, contributing significantly to disaster risk reduction for such slopes.
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random error in conditioning factors influence LSP uncertainties, and further to explore a method which can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Second, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Third, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influences of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and roughly equal. (5) The Shapley values effectively explain the internal mechanism of the machine learning models predicting landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
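The error-injection and filtering steps of such an experiment could be sketched as follows. How the error proportion is turned into a noise scale, and the use of a moving average as the low-pass filter, are illustrative assumptions; the paper does not specify its exact filter here:

```python
import numpy as np

def add_proportional_error(factor, proportion, rng=None):
    """Perturb a conditioning-factor array with zero-mean Gaussian errors whose
    standard deviation is `proportion` of the factor's own standard deviation
    (an assumed interpretation of the 5%-20% error proportions)."""
    rng = np.random.default_rng(rng)
    return factor + rng.normal(0.0, proportion * factor.std(), size=factor.shape)

def low_pass(factor, window=5):
    """Simple moving-average low-pass filter along a 1-D factor profile."""
    kernel = np.ones(window) / window
    return np.convolve(factor, kernel, mode="same")

# Demo: a smooth synthetic factor profile with 20% random error added,
# then filtered; the filtered profile is closer to the error-free original.
x = np.sin(np.linspace(0, 6, 200))
noisy = add_proportional_error(x, 0.20, rng=0)
filtered = low_pass(noisy)
```

Running the three variants of a factor (original, noisy, filtered) through the same LSP model is what isolates the contribution of random error to the prediction uncertainty.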
Funding: supported by the National Natural Science Foundation of China (Grant No. 42277165), the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Grant No. CUGCJ1821), and the National Overseas Study Fund (Grant No. 202106410040).
Abstract: As a calculation method based on the Galerkin variation, the numerical manifold method (NMM) adopts a double covering system, can easily deal with discontinuous deformation problems, and has high calculation accuracy. Aiming at the thermo-mechanical (TM) coupling problem of fractured rock masses, this study uses the NMM to simulate the processes of crack initiation and propagation in a rock mass under the influence of a temperature field, derives the related system equations, and proposes a penalty function method to deal with the boundary conditions. Numerical examples confirm the effectiveness and high accuracy of this method. Through the thermal stress analysis of a thick-walled cylinder (TWC), the simulation of cracking in the TWC under heating and cooling conditions, and the simulation of thermal cracking of the Swedish Äspö Pillar Stability Experiment (APSE) rock column, the thermal stress and TM coupling are obtained. The numerical simulation results are in good agreement with the test data and other numerical results, verifying the effectiveness of the NMM in dealing with thermal stress and crack propagation problems of fractured rock masses.
Abstract: This study presents a method for the inverse analysis of fluid flow problems. The focus is on accurately determining boundary conditions and characterizing the physical properties of granular media, such as permeability, and fluid components, such as viscosity. The primary aim is to deduce either a constant pressure head or pressure profiles, given the known velocity field at steady-state flow through a conduit containing obstacles, including walls, spheres, and grains. The lattice Boltzmann method (LBM) combined with automatic differentiation (AD-LBM) is employed, with the help of the GPU-capable Taichi programming language. A lightweight tape is used to generate gradients for the entire LBM simulation, enabling end-to-end backpropagation. Our AD-LBM approach accurately estimates the boundary conditions for complex flow paths in porous media, leading to the observed steady-state velocity fields and deriving macro-scale permeability and fluid viscosity. The method demonstrates significant advantages in prediction accuracy and computational efficiency, making it a powerful tool for solving inverse fluid flow problems in various applications.
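The inverse idea (recover a material property from an observed velocity by differentiating a forward flow model) can be illustrated with a deliberately tiny analogue that is not LBM: 1-D Darcy flow, where v = -(k/μ)·dp/dx. Here the gradient is written by hand, playing the role that automatic differentiation plays in AD-LBM; all names and values are illustrative:

```python
def recover_permeability(v_obs, dp_dx, mu, k0=1.0, lr=0.1, steps=500):
    """Toy 1-D Darcy inverse problem (a sketch, not the paper's AD-LBM).

    Forward model: v = -(k / mu) * dp_dx.  We recover k from an observed
    velocity v_obs by gradient descent on the squared residual.
    """
    k = k0
    for _ in range(steps):
        v = -(k / mu) * dp_dx                    # forward model
        grad = 2 * (v - v_obs) * (-dp_dx / mu)   # d(residual^2)/dk, by hand
        k -= lr * grad
    return k

# With true k = 2.5, mu = 1.0 and dp/dx = -0.4, the observed velocity is 1.0.
k_est = recover_permeability(v_obs=1.0, dp_dx=-0.4, mu=1.0)
```

In the paper's setting the forward model is a full LBM simulation and the parameter vector includes boundary conditions and viscosity, but the optimization loop has the same shape: simulate, compare with the observed velocity field, backpropagate, update.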
文摘Background: The robustness is a measurement of an analytical chemical method and its ability to contain unaffected by little with deliberate variation of analytical chemical method parameters. The analytical chemical method variation parameters are based on pH variability of buffer solution of mobile phase, organic ratio composition changes, stationary phase (column) manufacture, brand name and lot number variation;flow rate variation and temperature variation of chromatographic system. The analytical chemical method for assay of Atropine Sulfate conducted for robustness evaluation. The typical variation considered for mobile phase organic ratio change, change of pH, change of temperature, change of flow rate, change of column etc. Purpose: The aim of this study is to develop a cost effective, short run time and robust analytical chemical method for the assay quantification of Atropine in Pharmaceutical Ophthalmic Solution. This will help to make analytical decisions quickly for research and development scientists as well as will help with quality control product release for patient consumption. This analytical method will help to meet the market demand through quick quality control test of Atropine Ophthalmic Solution and it is very easy for maintaining (GDP) good documentation practices within the shortest period of time. Method: HPLC method has been selected for developing superior method to Compendial method. Both the compendial HPLC method and developed HPLC method was run into the same HPLC system to prove the superiority of developed method. Sensitivity, precision, reproducibility, accuracy parameters were considered for superiority of method. Mobile phase ratio change, pH of buffer solution, change of stationary phase temperature, change of flow rate and change of column were taken into consideration for robustness study of the developed method. Results: The limit of quantitation (LOQ) of developed method was much low than the compendial method. 
The % RSD for the six-sample assay of the developed method was 0.4%, whereas that of the compendial method was 1.2%. The reproducibility between two analysts was 100.4% for the developed method, versus 98.4% for the compendial method.
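The % RSD figures quoted for the two methods follow the standard formula (100 × sample standard deviation / mean of the replicate results). A minimal sketch, using hypothetical assay values for illustration only:

```python
from statistics import mean, stdev

def percent_rsd(values):
    """Percent relative standard deviation: 100 * sample stdev / mean."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical six-replicate assay results (% of label claim)
replicates = [100.1, 99.8, 100.3, 99.9, 100.2, 100.0]
rsd = percent_rsd(replicates)
```

A set of six replicates this tight yields a % RSD below 0.2, comfortably inside the 0.4% reported for the developed method.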
Funding: Supported by the National Key R&D Program of China (No. 2022YFF0800601) and the National Natural Science Foundation of China (Nos. 41930103 and 41774047).
Abstract: In this study, the vertical components of broadband teleseismic P-wave data recorded by the China Earthquake Network are used to image the rupture processes of the February 6, 2023 Turkish earthquake doublet via back-projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both the northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture feature. The cascading ruptures on both sides also reflect selective rupture behavior on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
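Back projection of the kind used here can be sketched as shift-and-stack beamforming: each station's trace is aligned by its predicted travel time to a candidate grid point, and the points whose alignment maximizes the stacked energy trace out the rupture. The toy below uses made-up impulse traces and travel times, not the actual teleseismic data or velocity model.

```python
def beam_energy(waveforms, travel_times, dt):
    """Stack station traces after removing the predicted travel-time shift
    to one candidate source point; return the energy of the stack.
    waveforms: equal-length sample lists, one per station.
    travel_times: predicted travel time (s) from the candidate point."""
    n = len(waveforms[0])
    stack = [0.0] * n
    for trace, t in zip(waveforms, travel_times):
        shift = int(round(t / dt))  # samples to advance this trace
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                stack[i] += trace[j]
    return sum(s * s for s in stack)
```

Scanning a grid of candidate points and keeping, per time window, the point with the highest beam energy yields the rupture images described in the abstract.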
Funding: Supported by the Japan Society for the Promotion of Science, KAKENHI Grant No. 23H00475.
Abstract: Inverse and direct piezoelectric and circuit coupling are widely observed in advanced electro-mechanical systems such as piezoelectric energy harvesters. Existing strongly coupled analysis methods based on direct numerical modeling of this phenomenon can be classified into partitioned or monolithic formulations. Each formulation has its advantages and disadvantages, and the choice depends on the characteristics of each coupled problem. This study proposes a new option: a coupled analysis strategy that combines the best features of the existing formulations, namely, a hybrid partitioned-monolithic method. The analysis of inverse piezoelectricity and the monolithic analysis of direct piezoelectric and circuit interaction are strongly coupled via a partitioned iterative hierarchical algorithm. On a typical benchmark problem of a piezoelectric energy harvester, this research compares the results of the proposed method with those of the conventional strongly coupled partitioned iterative method, discussing accuracy, stability, and computational cost. The proposed hybrid concept is effective for coupled multi-physics problems involving various coupling conditions.
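The partitioned iterative idea (alternately solving one field while the other's interface state is frozen, with under-relaxation, until the exchanged value converges) can be sketched abstractly. The two "solvers" below are hypothetical scalar stand-ins, not the piezoelectric and circuit field equations of the paper.

```python
def partitioned_solve(solve_a, solve_b, y=0.0, omega=0.5, tol=1e-12, max_iter=500):
    """Block Gauss-Seidel coupling: solve field A given B's interface state,
    then B given A's, under-relaxing the exchanged value by omega."""
    for _ in range(max_iter):
        x = solve_a(y)            # e.g. mechanical field for a fixed circuit state
        y_new = solve_b(x)        # e.g. circuit response to the mechanical field
        if abs(y_new - y) < tol:  # interface value has stopped changing
            return x, y_new
        y += omega * (y_new - y)  # under-relaxation stabilizes the iteration
    return x, y
```

For the toy pair x = 1 - 0.5*y and y = 1 - 0.5*x, the iteration converges to the coupled fixed point x = y = 2/3; a monolithic formulation would instead assemble and solve both equations in one system.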
Abstract: Background: Impurities are not expected in final pharmaceutical products. All impurities should be controlled in both drug substances and drug products in accordance with pharmacopeias and ICH guidelines. Three types of impurities generally appear in a pharmaceutical product specification: organic impurities, inorganic impurities, and residual solvents. Residual solvents are organic volatile chemicals used or generated during the manufacturing of drug substances or drug products. Purpose: The aim of this study is to develop a cost-effective gas chromatographic method for the identification and quantification of some commonly used solvents, namely methanol, acetone, isopropyl alcohol (IPA), methylene chloride, ethyl acetate, tetrahydrofuran (THF), benzene, toluene, and pyridine, in pharmaceutical product manufacturing. The method identifies and quantifies these multiple solvents within a single gas chromatographic run. Method: A gas chromatograph (GC) equipped with a headspace sampler and a flame ionization detector, fitted with a DB-624 column (30 m long, 0.32 mm internal diameter, 1.8 μm film thickness, Agilent), was used to develop this method. The initial GC oven temperature was 40°C, held for 5 minutes. It was then increased to 80°C at a rate of 2°C per minute, followed by a further increase to 225°C at a rate of 30°C per minute, with a final hold at 225°C for 10 minutes. Nitrogen was used as the carrier gas at a flow rate of 1.20 mL per minute. Dimethyl sulfoxide (DMSO) was selected as the sample solvent. Results: The developed method is precise and specific. The percent RSD for the peak areas of six replicate injections was within 10.0%, and the recovery results were within 80.0% to 120.0%.
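The temperature program above fixes the total run time by simple arithmetic: a 5 min hold, a 40°C ramp at 2°C/min (20 min), a 145°C ramp at 30°C/min (about 4.83 min), and a 10 min final hold, i.e. just under 40 minutes. A small sketch of that bookkeeping (the segment encoding is our own, not a vendor API):

```python
def oven_program_minutes(start_temp_c, segments):
    """Total run time for a GC oven temperature program.
    segments: ('hold', minutes) or ('ramp', target_temp_c, rate_c_per_min)."""
    total, temp = 0.0, start_temp_c
    for seg in segments:
        if seg[0] == "hold":
            total += seg[1]
        else:
            _, target, rate = seg
            total += abs(target - temp) / rate  # ramp time = delta-T / rate
            temp = target
    return total

# The program described in the abstract
program = [("hold", 5), ("ramp", 80, 2), ("ramp", 225, 30), ("hold", 10)]
run_time = oven_program_minutes(40, program)  # about 39.83 minutes
```

A per-segment encoding like this makes it easy to compare candidate programs when trading off resolution against run time.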