A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being extensively used and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fusion deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models to correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the proposed FCN so that the model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism into the model to highlight the importance of individual features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM-FCN in chemical process fault diagnosis.
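The attention step described above can be sketched in a few lines (a toy illustration with random stand-in features and shapes, not the authors' implementation): each time step's feature vector receives a softmax weight, and the weighted sum forms a context vector that a downstream classifier can consume.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, scores):
    """Weight per-timestep feature vectors by attention scores and sum them."""
    w = softmax(scores)      # (T,) attention weights, sum to 1
    return w @ features      # (D,) attention-weighted context vector

rng = np.random.default_rng(0)
T, D = 8, 4                          # 8 time steps, 4 features each (made up)
feats = rng.normal(size=(T, D))      # stand-in for LSTM/FCN activations
scores = rng.normal(size=T)          # stand-in for learned relevance scores
context = attention_pool(feats, scores)
```

In the real model the scores themselves are produced by a learned layer; here they are random only to keep the sketch self-contained.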
Effective vibration recognition can improve the performance of vibration control and structural damage detection and is in high demand for signal processing and advanced classification. Signal-processing methods can extract the potent time-frequency-domain characteristics of signals; however, the performance of conventional characteristics-based classification needs to be improved. Widely used deep learning algorithms (e.g., convolutional neural networks (CNNs)) can conduct classification by extracting high-dimensional data features, with outstanding performance. Hence, combining the advantages of signal processing and deep-learning algorithms can significantly enhance vibration recognition performance. A novel vibration recognition method based on signal processing and deep neural networks is proposed herein. First, environmental vibration signals are collected; then, signal processing is conducted to obtain the coefficient matrices of the time-frequency-domain characteristics using three typical algorithms: the wavelet transform, the Hilbert-Huang transform, and the Mel-frequency cepstral coefficient extraction method. Subsequently, CNNs, long short-term memory (LSTM) networks, and combined deep CNN-LSTM networks are trained for vibration recognition according to the time-frequency-domain characteristics. Finally, the performance of the trained deep neural networks is evaluated and validated. The results confirm the effectiveness of the proposed vibration recognition method combining signal preprocessing and deep learning.
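As a hedged illustration of producing a time-frequency coefficient matrix, the sketch below uses a plain short-time Fourier magnitude spectrogram as a generic stand-in for the wavelet, Hilbert-Huang, and MFCC transforms named above (the signal, frame length, and hop are invented):

```python
import numpy as np

def stft_magnitude(signal, frame_len=64, hop=32):
    """Toy short-time Fourier transform: returns a (freq, time) matrix."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

fs = 1000                                    # hypothetical sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
vib = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
coeffs = stft_magnitude(vib)                 # input matrix for a CNN/LSTM
```

A matrix like `coeffs` is the kind of 2-D input the CNN, LSTM, or CNN-LSTM classifiers would be trained on.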
More and more attention is paid to listening comprehension and learner factors nowadays, but less attention is paid to the effect of short-term memory. By analyzing the function and effect of short-term memory in the processing of information in the mind, this essay points out that short-term memory plays a vital role in listening comprehension and puts forward three effective ways to improve short-term memory retention.
Quality traceability plays an essential role in assembling and welding offshore platform blocks. The improvement of the welding quality traceability system is conducive to improving the durability of the offshore platform and the process level of the offshore industry. Currently, quality management remains in the era of primary information, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from various complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed, which contains a quality early-warning method based on long short-term memory and a quality data backtracking query optimization algorithm. By training the early-warning model and implementing the query optimization algorithm, the quality traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the quality traceability algorithm are checked against cases from actual working conditions. Verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking queries are effective and can be applied to the actual construction process.
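The backtracking-query idea can be illustrated with a minimal inverted index over hypothetical welding-quality records (the record schema and attribute names are invented, and the paper's actual optimization algorithm is not shown): indexing each attribute value lets a defect be traced to matching records by set intersection instead of a full scan.

```python
from collections import defaultdict

# Hypothetical welding-quality records: (record_id, attributes)
records = [
    (1, {"block": "B12", "welder": "W03", "current_A": 210}),
    (2, {"block": "B12", "welder": "W07", "current_A": 245}),
    (3, {"block": "B15", "welder": "W03", "current_A": 198}),
]

# Inverted index: (attribute, value) -> set of record ids
index = defaultdict(set)
for rid, attrs in records:
    for key, val in attrs.items():
        index[(key, val)].add(rid)

def backtrack(**criteria):
    """Return ids of records matching all criteria via index intersection."""
    sets = [index[(k, v)] for k, v in criteria.items()]
    return set.intersection(*sets) if sets else set()
```

For example, `backtrack(block="B12", welder="W03")` narrows the search to the single record behind a defect on block B12 welded by W03.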
We present (at the 13th International Conference on Geology and Geophysics) convincing evidence that the strongest earthquakes (according to the U.S. Geological Survey) of the Earth during the range 2020-2023 AD occurred near the predicted dates (calculated in advance based on the global prediction thermohydrogravidynamic principles determining the maximal temporal intensifications of the global seismotectonic, volcanic, climatic and magnetic processes of the Earth): 2020.016666667 AD (Simonenko, 2020), 2021.1 AD (Simonenko, 2019, 2020), 2022.18333333 AD (Simonenko, 2021), 2023.26666666 AD (Simonenko, 2022) and 2020.55 AD, 2021.65 AD (Simonenko, 2019, 2021), 2022.716666666 AD (Simonenko, 2022), respectively, corresponding to the local maximal and the local minimal, respectively, combined planetary and solar integral energy gravitational influences on the internal rigid core of the Earth. We present the short-term thermohydrogravidynamic technology (based on the generalized differential formulation of the first law of thermodynamics and the first global prediction thermohydrogravidynamic principle) for evaluating the maximal magnitude of the strongest earthquake of the Earth during March 2023 AD, which occurred on March 16, 2023 AD (according to the U.S. Geological Survey).
Because of the low availability of biodegradable organic substances used for denitrification, the partial nitrification-denitrification process has been considered a low-cost, sustainable alternative for landfill leachate treatment. In this study, the process upgrade from conventional to partial nitrification-denitrification was comprehensively investigated in a full-scale landfill leachate treatment plant (LLTP). The partial nitrification-denitrification system was successfully achieved by optimizing the dissolved oxygen and the external carbon source, with effluent nitrogen concentrations lower than 150 mg/L. Moreover, the upgrading process facilitated the enrichment of Nitrosomonas (abundance increased from 0.4% to 3.3%), which was also evidenced by the increased abundance of amoA/B/C genes carried by Nitrosomonas. Although Nitrospira (accounting for 0.1%-0.6%) was found to exist stably in the reactor tank, considerable nitrite accumulation occurred in the reactor (reaching 98.8 mg/L), indicating the high efficiency of the partial nitrification process. Moreover, the abundance of Thauera, the dominant denitrifying bacteria responsible for nitrite reduction, gradually increased from 0.60% to 5.52% during the upgrade process. This process caused great changes in the microbial community, inducing continuous succession of heterotrophic bacteria accompanied by enhanced metabolic potential toward organic substances. The results obtained in this study advance our understanding of the operation of a partial nitrification-denitrification system and provide a technical case for the upgrade of existing full-scale LLTPs.
Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to increased training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a novel distillation model for PAL-BERT (DPAL-BERT) that employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results showed that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they significantly reduce inference time to just 0.25% of the original. This demonstrates the effectiveness of the proposed approach in balancing model performance and efficiency.
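A minimal sketch of the knowledge-distillation objective (a generic Hinton-style soft/hard loss with made-up logits, not the exact DPAL-BERT training code) blends the KL divergence to the temperature-softened teacher distribution with the ordinary cross-entropy on the true label:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    """Blend soft-target KL (teacher knowledge) with hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)      # softened teacher distribution
    p_s = softmax(student_logits, T)      # softened student distribution
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T
    hard = -np.log(softmax(student_logits)[true_label])
    return alpha * soft + (1 - alpha) * hard

teacher = np.array([4.0, 1.0, -2.0])      # stand-in for teacher outputs
student = np.array([3.0, 1.5, -1.0])      # stand-in for student outputs
loss = distillation_loss(student, teacher, true_label=0)
```

The temperature T softens both distributions so the student also learns from the teacher's relative confidence in wrong classes, not only from the hard label.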
Shortcut nitrification-denitrification (SCND) has attracted wide attention because of its low energy consumption and high nitrogen removal efficiency. However, the current difficulty lies in the stable maintenance of SCND performance, which challenges the large-scale application of this new denitrification technology. In this study, the shift of the nitrogen removal pathway from complete nitrification-denitrification (CND) to SCND was rapidly realized under high free ammonia (FA), high pH and low dissolved oxygen (DO) conditions. The variations in the specific oxygen uptake rate (SOUR) of activated sludge in both processes were investigated by an online SOUR monitoring device. Different SOUR curves from the CND to the SCND process were observed, and the ammonia peak obtained from SOUR monitoring could be used to control the aeration time accurately in the SCND process. Accordingly, the SOUR ratio of ammonia-oxidizing bacteria (AOB) to nitrite-oxidizing bacteria (NOB) (SOUR_AOB/SOUR_NOB) increased from 1.40 to 2.93. 16S rRNA MiSeq high-throughput sequencing revealed the dynamics of AOB and NOB, and the ratio of their relative abundances (AOB/NOB) increased from 1.03 to 3.12. Moreover, SOUR_AOB/SOUR_NOB displayed significant correlations with the ammonia removal rate (P<0.05), the ammonia oxidation rate/nitrite oxidation rate (P<0.05), the nitrite accumulation rate (P<0.05) and the relative abundance of AOB/NOB (P<0.05). Thus, a strategy for evaluating the stability of the SCND process based on online SOUR monitoring is proposed, which provides a theoretical basis for optimizing SCND performance.
It is recognized that a city with a livable environment can bring happiness to residents. In this study, we explored social media users' emotional states in their current living spaces and examined the relationship between social media users' emotions and urban livability. We adopted six urban livability indicators (education, medical services, public facilities, leisure places, employment, and transportation) to construct city livability indices. The Analytic Hierarchy Process (AHP) spatial statistical method was applied to identify and analyze the different habitable regions of Wuhan City. For citizens' emotion analysis, we used a Long Short-Term Memory (LSTM) neural network to analyze Weibo texts and obtain Weibo users' sentiment scores. The correlation analysis of residents' emotions and city livability shows a positive correlation between the livable city areas (i.e., the areas with higher livability ranking indices) and Weibo users' sentiment scores (with a Pearson correlation coefficient of 0.881 and a P-value of 0.004). In other words, people who post on Weibo in highly livable areas of Wuhan express more positive emotional states. Still, the emotion distribution varies across regions, which is mainly caused by the distribution of people and the diversity of the city's functional areas.
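The AHP weighting step can be sketched as follows (a toy 3-indicator pairwise comparison matrix with invented judgments; the study uses six indicators): the priority weights are the normalized principal eigenvector of the comparison matrix.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Hypothetical comparisons (e.g., education vs. medical vs. transportation):
# entry [i, j] says how much more important indicator i is than indicator j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
weights = ahp_weights(A)
```

In a full AHP workflow one would also check the consistency ratio of the judgments before trusting the weights.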
In this paper, we summarize recent progress in deep-learning-based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end and emphasize feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
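The CTC criterion mentioned above relies on a many-to-one mapping from frame-level paths to label sequences; the collapse step, at least, is easy to sketch (a generic illustration, not tied to any surveyed system):

```python
def ctc_collapse(path, blank="-"):
    """Map a frame-level CTC path to its label sequence:
    merge consecutive repeats, then drop blank symbols."""
    out = []
    prev = None
    for sym in path:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return "".join(out)
```

The blank symbol lets genuinely repeated labels survive: "ll-lo" keeps both l's, while "llo" would collapse them to one.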
AIM: To utilise a comprehensive cognitive battery to gain a better understanding of cognitive performance in anorexia nervosa (AN). METHODS: Twenty-six individuals with AN and 27 healthy control participants, matched for age, gender and premorbid intelligence, participated in the study. A standard cognitive battery, the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery, was used to investigate performance on seven cognitive domains with the use of 10 different tasks: speed of processing [Brief Assessment of Cognition in Schizophrenia: Symbol Coding, Category Fluency: Animal Naming (Fluency) and Trail Making Test: Part A], attention/vigilance [Continuous Performance Test-Identical Pairs (CPT-IP)], working memory [Wechsler Memory Scale (WMS-III): Spatial Span, and Letter-Number Span (LNS)], verbal learning [Hopkins Verbal Learning Test-Revised], visual learning [Brief Visuospatial Memory Test-Revised], reasoning and problem solving [Neuropsychological Assessment Battery: Mazes], and social cognition [Mayer-Salovey-Caruso Emotional Intelligence Test: Managing Emotions]. Statistical analyses involved the use of multivariate and univariate analyses of variance. RESULTS: Analyses conducted on the cognitive domain scores revealed no overall significant difference between groups, nor any interaction between group and domain score [F(1,45)=0.73, P=0.649]. Analyses conducted on each of the specific tasks within the cognitive domains revealed significantly slower reaction times for false-alarm responses on the CPT-IP task in AN [F(1,51)=12.80, P<0.01, Cohen's d=0.982] and a trend towards poorer performance in AN on the backward component of the WMS-III Spatial Span task [F(1,51)=5.88, P=0.02, Cohen's d=-0.665]. The finding of slower reaction times for false-alarm responses is, however, limited by the small number of false-alarm responses in either group. CONCLUSION: The findings are discussed in terms of a poorer capacity to manipulate and process visuospatial material in AN.
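The Cohen's d values reported above measure effect size as the group mean difference scaled by the pooled standard deviation; a small sketch with invented reaction-time data:

```python
import math

def cohens_d(group_a, group_b):
    """Effect size: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical reaction times (ms): patient group vs. control group
d = cohens_d([520, 540, 560, 580], [480, 500, 510, 530])
```

By common convention, |d| near 0.2 is a small effect, 0.5 medium, and 0.8 or more large, so the reported d=0.982 is a large effect.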
Tropical cyclone (TC) annual frequency forecasting is significant for disaster prevention and mitigation in Guangdong Province. Based on the NCEP-NCAR reanalysis and NOAA Extended Reconstructed global sea surface temperature (SST) V5 data in winter, the TC frequency climatic features and prediction models have been studied. During 1951-2019, 353 TCs directly affected Guangdong, with an annual average of about 5.1. TC frequency experienced an abrupt change from abundance to deficiency in the mid-to-late 1980s, with a slightly decreasing trend and a normal distribution. 338 primary precursors are obtained from statistically significant correlation regions of SST, sea level pressure, 1000-hPa air temperature, 850-hPa specific humidity, 500-hPa geopotential height and zonal wind shear in winter. These 338 primary factors are then reduced to 19 independent predictors by principal component analysis (PCA). Furthermore, Multiple Linear Regression (MLR), Gaussian Process Regression (GPR) and Long Short-Term Memory Networks with Fully Connected Layers (LSTM-FC) models are constructed from the above 19 factors. For three different test sets, from 2010 to 2019, 2011 to 2019 and 2010 to 2019, the root mean square errors (RMSEs) of MLR, GPR and LSTM-FC between predictions and observations fluctuate within the ranges of 1.05-2.45, 1.00-1.93 and 0.71-0.95, and the average absolute errors (AAEs) within 0.88-1.0, 0.75-1.36 and 0.50-0.70, respectively. For the 2010-2019 experiment, the mean deviations of the three model outputs from the observations are 0.89, 0.78 and 0.56, together with average evaluation scores of 82.22, 84.44 and 88.89, respectively. The prediction skill comparisons reveal that the LSTM-FC model performs better than MLR and GPR. In conclusion, the deep learning LSTM-FC model may shed light on improving the accuracy of short-term climate prediction of TC frequency. The current research can provide experience for the development of deep learning in this field and help achieve further progress in TC disaster prevention and mitigation in Guangdong Province.
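The PCA-plus-regression part of the pipeline can be sketched on synthetic data (random stand-in predictors, 3 components instead of 19, and ordinary least squares for the MLR; the GPR and LSTM-FC models are not shown):

```python
import numpy as np

rng = np.random.default_rng(42)
base = rng.normal(size=(60, 1))               # hidden climate signal (toy)
X = base + 0.3 * rng.normal(size=(60, 10))    # 10 correlated winter precursors
y = 2.0 * base[:, 0] + 5.0 + rng.normal(scale=0.3, size=60)  # toy TC counts

# PCA: project the centered predictors onto the leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                             # keep 3 components (paper keeps 19)

# Multiple linear regression on the reduced predictors (least squares + intercept)
A = np.hstack([Z, np.ones((len(Z), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
```

Reducing the 10 correlated columns to a few orthogonal components is what keeps the regression stable despite the strong collinearity among precursors.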
In the task of multi-target stance detection, the contents describing different targets influence one another, resulting in reduced accuracy. To solve this problem, a multi-target stance detection algorithm based on a bidirectional long short-term memory (Bi-LSTM) network with position weights is proposed. First, the position of each target in the input text is calculated to obtain the position-weight vector. Next, the position information and the output of the Bi-LSTM layer are fused by the position-weight fusion layer. Finally, the stances toward different targets are predicted using the LSTM network and softmax classification. The multi-target stance detection corpus of the 2016 American election is used to validate the proposed method. The results demonstrate that the Bi-LSTM network with position weights achieves an advantage of 1.4% in macro-average F1 value in comparison with recent algorithms.
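One plausible form of a position-weight vector is a linear decay with token distance from the target (this is an assumption for illustration; the paper's exact weighting scheme may differ): tokens near the target get weights close to 1, so the fusion layer emphasizes target-relevant context.

```python
def position_weights(tokens, target, decay=0.1):
    """Hypothetical position weights: 1.0 at the target token,
    decaying linearly with distance, floored at 0."""
    if target in tokens:
        pos = tokens.index(target)
        return [max(0.0, 1.0 - decay * abs(i - pos)) for i in range(len(tokens))]
    return [1.0 / len(tokens)] * len(tokens)   # uniform fallback

tokens = "the candidate answered the climate question firmly".split()
w = position_weights(tokens, "climate")
```

Element-wise multiplying such a vector with the Bi-LSTM outputs is one simple way a "position-weight fusion layer" could inject the position information.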
College classes are becoming increasingly large. A critical component in scaling class size is the collaboration and interaction among instructors, teaching assistants, and students. We develop a prototype of an intelligent voice instructor-assistant system for supporting large classes, built on Amazon Web Services, Alexa Voice Service, and self-developed services. It uses a scraping service to read the questions and answers from past and current course discussion boards, organizes the questions in JavaScript Object Notation (JSON) format, and stores them in a database that can be accessed by Amazon Web Services Alexa skills. When a voice question from a student comes in, Alexa translates the spoken sentence into text. Then, a Siamese deep long short-term memory model is introduced to calculate the similarity between the question asked and the questions in the database to find the best-matched answer. Questions with no match are sent to the instructor, and the instructor's answer is added to the database. Experiments show that the implemented model achieves promising results that can lead to a practical system. The intelligent voice instructor-assistant system starts with a small set of questions and can grow through learning and improvement as more and more questions are asked and answered.
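The best-match retrieval step can be approximated with a simple bag-of-words cosine similarity in place of the Siamese LSTM (a deliberate simplification; the FAQ entries below are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sentences as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

faq = {  # hypothetical stored question -> answer pairs
    "When is the homework due": "Friday at noon.",
    "What chapters does the exam cover": "Chapters 3 through 5.",
}

def best_answer(question):
    """Return the answer whose stored question is most similar."""
    match = max(faq, key=lambda q: cosine(q, question))
    return faq[match]
```

A learned Siamese encoder would replace `cosine` with similarity between dense sentence embeddings, which handles paraphrases with no word overlap.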
Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using syntax or programming language technologies. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep-learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on neural machine translation (NMT) to work as a language translator. The DLBT is based on the transformer, an encoder-decoder structure. There are three major components: tokenizer and embeddings, transformer, and post-processing. Each code line is tokenized into a dense vector. Then the transformer captures the relatedness between the source code and the matching pseudo-code without the need for a recurrent neural network (RNN). At the post-processing step, the generated pseudo-code is optimized. The proposed model is assessed using a real Python dataset containing more than 18,800 lines of source code written in Python. The experiments show promising performance results compared with other machine translation methods such as RNNs. The proposed DLBT records an accuracy of 47.32 and a BLEU score of 68.49.
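The reported BLEU score measures n-gram overlap between generated and reference pseudo-code; a toy unigram-only version (real BLEU combines clipped precisions up to 4-grams) looks like this:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Toy sentence-level BLEU with unigrams only:
    clipped precision times the brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped counts
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("if x is greater than 0 print x",
              "if x is greater than 0 , print x")
```

The brevity penalty stops a model from scoring well by emitting only a few safe tokens; clipping stops it from repeating one common token.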
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with the attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not only address experts: readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming at bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
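Among the optimizers such a review covers, Adam is representative of the stochastic adaptive family; a minimal sketch on a two-variable quadratic (illustrative hyperparameters, not taken from the review):

```python
import numpy as np

def adam_minimize(grad, x0, steps=2000, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """Minimal Adam: momentum plus per-coordinate adaptive step size."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first moment (running mean of gradients)
    v = np.zeros_like(x)   # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)      # bias correction for the warm-up phase
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Quadratic bowl with minimum at (3, -1): grad f(x) = 2 * (x - target)
target = np.array([3.0, -1.0])
x_min = adam_minimize(lambda x: 2 * (x - target), x0=[0.0, 0.0])
```

The per-coordinate scaling by the second moment is what makes Adam robust to badly scaled parameters, at the cost of a small limit cycle around the minimum.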
An oxic-anoxic-oxic (O-A-O) system followed by coagulation and ozonation processes was used to study the treatment of coking wastewater. In the O-A-O process, the removals of NH4+-N, total nitrogen and COD were 91.5%-93.3%, 91.3%-92.6% and 89.1%-93.8%, respectively, when employing a hydraulic residence time of 60 h for the biochemical system. High removal of NH4+-N was obtained because the placement of an aerobic tank in front of the A-O system can mitigate the inhibitory effect of toxic compounds in coking wastewater on nitrifying bacteria. Addition of methanol to the anoxic reactor greatly increased the removal of total nitrogen, indicating that denitrifiers can hardly use the organic compounds in coking wastewater as a carbon source for denitrification. COD values of the effluent from the O-A-O system were still higher than 260 mg/L even with a prolonged residence time of 160 h, mainly due to the highly refractory nature of the residual compounds in the effluent. The subsequent coagulation and ozonation processes resulted in a COD removal of 91.5%-93.3% and reduced the relative abundance of large-molecular-weight (MW) organics (>1 kDa) from 55.8% to 20.93% with ozone, PAC and PAM dosages of 100, 150 and 4 mg/L, respectively. Under these conditions, the COD value and the concentration of polycyclic aromatic hydrocarbons in the final effluent were less than 80 and 0.05 mg/L, respectively, which meet the requirements of the Chinese emission standard. These results indicate that the combined technology of the O-A-O process, coagulation and ozonation is a reliable way to treat coking wastewater.
As a high-efficiency hydrogen-to-power device, the proton exchange membrane fuel cell (PEMFC) attracts much attention, especially for automotive applications. Real-time prediction of output voltage and area-specific resistance (ASR) via an on-board model is critical for monitoring the health state of the automotive PEMFC stack. In this study, we use a transient PEMFC system model for dynamic process simulation of the PEMFC to generate the dataset, and a long short-term memory (LSTM) deep learning model is developed to predict the dynamic performance of the PEMFC. The results show that the developed LSTM deep learning model performs much better than other models. A sensitivity analysis of the input features is performed, and three insensitive features are removed, which slightly improves the prediction accuracy and significantly reduces the data volume. The neural structure, sequence duration, and sampling frequency are optimized. We find that the optimal sequence duration for predicting ASR is 5 s or 20 s, and that for predicting output voltage is 40 s. The sampling frequency can be reduced from 10 Hz to 0.5 Hz or 0.25 Hz, which slightly affects the prediction accuracy but markedly reduces the data volume and computation.
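Preparing sequence data for such an LSTM, including the downsampling that cuts data volume, can be sketched generically (the window length, signal, and rates below are invented stand-ins, not the paper's settings):

```python
import numpy as np

def make_sequences(series, window, stride=1, downsample=1):
    """Slice a signal into (window, next-value) training pairs,
    optionally downsampling first to reduce data volume."""
    s = series[::downsample]
    X, y = [], []
    for i in range(0, len(s) - window, stride):
        X.append(s[i:i + window])   # input sequence
        y.append(s[i + window])     # value to predict
    return np.array(X), np.array(y)

voltage = np.sin(np.linspace(0, 20, 400))   # stand-in for a logged voltage trace
# Downsampling by 20 mimics cutting a 10 Hz log down to 0.5 Hz
X, y = make_sequences(voltage, window=5, downsample=20)
```

Trading sampling frequency against accuracy then amounts to sweeping `downsample` and `window` and comparing prediction error against data volume.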
Technical debt is a metaphor for seeking short-term gains at the expense of long-term code quality. Previous studies have shown that self-admitted technical debt, which is introduced intentionally, has strong negative impacts on software development and incurs high maintenance overheads. To help developers identify self-admitted technical debt, researchers have proposed many state-of-the-art methods. However, there is still room for improving the effectiveness of current methods, as self-admitted technical debt comments are characterized by variable length, low proportion and diverse style. Therefore, in this paper, we propose a novel approach based on bidirectional long short-term memory (BiLSTM) networks with an attention mechanism to automatically detect self-admitted technical debt by leveraging source code comments. In the BiLSTM, we utilize a balanced cross-entropy loss function to overcome the class imbalance problem. We experimentally investigate the performance of our approach on a public dataset comprising 62,566 code comments from ten open-source projects. Experimental results show that our approach achieves 81.75% precision, 72.24% recall and 75.86% F1-score on average, and outperforms the state-of-the-art text-mining-based method by 8.14%, 5.49% and 6.64%, respectively.
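The balanced cross-entropy idea can be sketched for the binary case (the 0.9/0.1 weighting is an invented example, not the paper's setting): the rare positive class is up-weighted so that missing a debt comment costs more than raising a false alarm.

```python
import math

def balanced_cross_entropy(p, label, pos_weight):
    """Weighted binary cross-entropy: up-weight the rare positive class
    (e.g., self-admitted technical debt comments)."""
    p = min(max(p, 1e-7), 1 - 1e-7)   # clip for numerical safety
    if label == 1:
        return -pos_weight * math.log(p)
    return -(1 - pos_weight) * math.log(1 - p)

# With few positives, weight positive errors by 0.9 and negative errors by 0.1:
loss_missed_debt = balanced_cross_entropy(0.2, 1, pos_weight=0.9)   # confident miss
loss_false_alarm = balanced_cross_entropy(0.8, 0, pos_weight=0.9)   # confident FP
```

With a plain (unweighted) cross-entropy, the two errors above would cost the same; the weighting pushes the classifier toward recall on the minority class.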
Online short-term rental platforms, such as Airbnb, have become popular, and a better pricing strategy is imperative for hosts of new listings. In this paper, we analyze the relationship between the description of each listing and its price, and propose a text-based price recommendation system called TAPE to recommend a reasonable price for newly added listings. We used deep learning techniques (e.g., a feedforward network, long short-term memory, and mean shift) to design and implement TAPE. Using two chronologically extracted datasets of the same four cities, we reveal important factors (e.g., indoor equipment and high-density areas) that positively or negatively affect each property's price, and evaluate our preliminary and enhanced models. Our models achieve a Root-Mean-Square Error (RMSE) of 33.73 in Boston, 20.50 in London, 34.68 in Los Angeles, and 26.31 in New York City, which is comparable to an existing model that uses more features.
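Mean shift, one of the techniques listed, finds cluster centers by repeatedly moving a point to the mean of its neighbors; a one-dimensional sketch with invented nightly prices (a flat-kernel simplification of the general algorithm):

```python
def mean_shift_1d(points, x0, bandwidth=5.0, iters=50):
    """1-D mean shift with a flat kernel: move x to the mean of the points
    within `bandwidth` until it settles on a local density peak."""
    x = float(x0)
    for _ in range(iters):
        neighbors = [p for p in points if abs(p - x) <= bandwidth]
        if not neighbors:
            break
        x = sum(neighbors) / len(neighbors)
    return x

# Hypothetical nightly prices with two clusters, around ~50 and ~120
prices = [45, 48, 50, 52, 55, 115, 118, 120, 122, 125]
center_low = mean_shift_1d(prices, x0=47)
center_high = mean_shift_1d(prices, x0=119)
```

In a pricing system, such cluster centers give natural anchor prices for listings whose features place them in one market segment or the other.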
文摘A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being extensively used and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fusion deeplearning algorithm based on a fully convolutional neural network(FCN) to extract features and build models to correctly diagnose all types of faults. We use long short-term memory(LSTM) units to expand our proposed FCN so that our proposed deep learning model can better extract the time-domain features of chemical process data. We also introduce the attention mechanism into the model, aimed at highlighting the importance of features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, our proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM FCN in chemical process fault diagnosis.
Abstract: Effective vibration recognition can improve the performance of vibration control and structural damage detection and is in high demand for signal processing and advanced classification. Signal-processing methods can extract the potent time-frequency-domain characteristics of signals; however, the performance of conventional characteristics-based classification needs to be improved. Widely used deep learning algorithms (e.g., convolutional neural networks (CNNs)) can conduct classification by extracting high-dimensional data features, with outstanding performance. Hence, combining the advantages of signal processing and deep-learning algorithms can significantly enhance vibration recognition performance. A novel vibration recognition method based on signal processing and deep neural networks is proposed herein. First, environmental vibration signals are collected; then, signal processing is conducted to obtain the coefficient matrices of the time-frequency-domain characteristics using three typical algorithms: the wavelet transform, the Hilbert-Huang transform, and the Mel frequency cepstral coefficient extraction method. Subsequently, CNNs, long short-term memory (LSTM) networks, and combined deep CNN-LSTM networks are trained for vibration recognition, according to the time-frequency-domain characteristics. Finally, the performance of the trained deep neural networks is evaluated and validated. The results confirm the effectiveness of the proposed vibration recognition method combining signal preprocessing and deep learning.
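The paper's wavelet, Hilbert-Huang, and MFCC front ends are not reproduced here; as a simplified stand-in, the general step of turning a 1-D vibration signal into a time-frequency coefficient matrix suitable for a CNN/LSTM can be sketched with a plain short-time Fourier magnitude (illustrative parameters, not the authors' pipeline):

```python
import numpy as np

def stft_magnitude(signal, frame_len=64, hop=32):
    """Short-time Fourier magnitude matrix: rows = time frames,
    columns = frequency bins. A stand-in for the paper's coefficient matrices."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# Synthetic "vibration": a 40 Hz tone sampled at 512 Hz plus mild noise.
t = np.linspace(0, 1, 512, endpoint=False)
vib = np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.default_rng(1).normal(size=512)
tf = stft_magnitude(vib)   # shape (15, 33): 15 frames, 33 frequency bins
```

With 64-sample frames at 512 Hz the bin spacing is 8 Hz, so the 40 Hz tone concentrates its energy in bin 5; a classifier then consumes `tf` as a 2-D feature map.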
Abstract: More and more attention is being paid to listening comprehension and learner factors nowadays, but less attention is paid to the effect of short-term memory. By analyzing the function and effect of short-term memory in the processing of information in the mind, this essay points out that short-term memory plays a vital role in listening comprehension, and puts forward three of the most effective ways to improve short-term memory retention.
Funding: Funded by the Ministry of Industry and Information Technology of the People’s Republic of China [Grant No. 2018473].
Abstract: Quality traceability plays an essential role in assembling and welding offshore platform blocks. The improvement of the welding quality traceability system is conducive to improving the durability of the offshore platform and the process level of the offshore industry. Currently, quality management remains in the era of primary information, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from the various complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed; it contains a quality early-warning method based on long short-term memory and a quality data backtracking query optimization algorithm. By fulfilling the training of the early-warning model and the implementation of the query optimization algorithm, the quality traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the quality traceability algorithm are checked against cases from actual working conditions. Verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking requests are effective and can be applied to the actual construction process.
Abstract: We presented (at the 13<sup>th</sup> International Conference on Geology and Geophysics) convincing evidence that the strongest earthquakes of the Earth (according to the U.S. Geological Survey) during 2020 - 2023 AD occurred near the predicted dates (calculated in advance based on the global prediction thermohydrogravidynamic principles determining the maximal temporal intensifications of the global seismotectonic, volcanic, climatic and magnetic processes of the Earth) 2020.016666667 AD (Simonenko, 2020), 2021.1 AD (Simonenko, 2019, 2020), 2022.18333333 AD (Simonenko, 2021), 2023.26666666 AD (Simonenko, 2022) and 2020.55 AD, 2021.65 AD (Simonenko, 2019, 2021), 2022.716666666 AD (Simonenko, 2022), respectively, corresponding to the local maximal and the local minimal, respectively, combined planetary and solar integral energy gravitational influences on the internal rigid core of the Earth. We also present the short-term thermohydrogravidynamic technology (based on the generalized differential formulation of the first law of thermodynamics and the first global prediction thermohydrogravidynamic principle) for evaluating the maximal magnitude of the strongest earthquake of the Earth during March 2023 AD, which occurred on March 16, 2023 AD (according to the U.S. Geological Survey).
Funding: We acknowledge the National Key R&D Program of China (No. 2017YFE0114300), the Special Fund for Science and Technology Innovation Strategy of Guangdong Province (No. 2018B020202), the Natural Science Foundation of Guangdong Province (No. 2018A030310246), and the National Natural Science Foundation of China (Nos. 21607177, 51622813 and 51808564) for financially supporting this study.
Abstract: Because of the low availability of biodegradable organic substances for denitrification, the partial nitrification-denitrification process has been considered a low-cost, sustainable alternative for landfill leachate treatment. In this study, the process upgrade from conventional to partial nitrification-denitrification was comprehensively investigated in a full-scale landfill leachate treatment plant (LLTP). The partial nitrification-denitrification system was successfully achieved through optimizing the dissolved oxygen and the external carbon source, with effluent nitrogen concentrations lower than 150 mg/L. Moreover, the upgrading process facilitated the enrichment of Nitrosomonas (abundance increased from 0.4% to 3.3%), which was also evidenced by the increased abundance of amoA/B/C genes carried by Nitrosomonas. Although Nitrospira (accounting for 0.1%-0.6%) was found to stably exist in the reactor tank, considerable nitrite accumulation occurred in the reactor (reaching 98.8 mg/L), indicating the high efficiency of the partial nitrification process. Moreover, the abundance of Thauera, the dominant denitrifying bacteria responsible for nitrite reduction, gradually increased from 0.60% to 5.52% during the upgrade. This process caused great changes in the microbial community, inducing a continuous succession of heterotrophic bacteria accompanied by enhanced metabolic potential toward organic substances. The results obtained in this study advance our understanding of the operation of a partial nitrification-denitrification system and provide a technical case for the upgrade of currently existing full-scale LLTPs.
Funding: Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Abstract: Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to increased training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a novel distillation model for PAL-BERT (DPAL-BERT); specifically, it employs knowledge distillation, using the PAL-BERT model as the teacher model to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results showed that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they significantly reduce inference time to just 0.25% of the original. This demonstrates the effectiveness of the proposed approach in balancing model performance and efficiency.
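The abstract names knowledge distillation without giving the loss; as a hedged sketch of the standard formulation (temperature-softened KL term blended with hard-label cross-entropy; the temperature and mixing weight here are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend of (a) KL divergence between temperature-softened teacher and
    student distributions, scaled by T^2, and (b) hard-label cross-entropy."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean() * temperature ** 2
    n = len(labels)
    hard = -np.log(softmax(student_logits)[np.arange(n), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

student = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
teacher = np.array([[2.2, 0.4, -0.9], [0.0, 1.8, 0.1]])
labels = np.array([0, 1])
loss = distillation_loss(student, teacher, labels)
```

When the student's logits exactly match the teacher's, the KL term vanishes and only the hard-label term remains, which is why distilled students can track the teacher closely while being much smaller.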
Funding: This research was supported by the Sichuan Key Point Research and Invention Program (Nos. 2019YFS0502, 2020YFS0026), the Instrument Developing Project of the Chinese Academy of Sciences (No. YJKYYQ20180002), and a project funded by the China Postdoctoral Science Foundation (No. 2020M673293).
Abstract: Shortcut nitrification-denitrification (SCND) has attracted wide concern because of its low energy consumption and high nitrogen removal efficiency. However, the current difficulty lies in the stable maintenance of SCND performance, which challenges the large-scale application of this new denitrification technology. In this study, the switch of the nitrogen removal pathway from complete nitrification-denitrification (CND) to SCND was rapidly realized under high free ammonia (FA), high pH and low dissolved oxygen (DO) conditions. The variations of the specific oxygen uptake rate (SOUR) of activated sludge in both processes were investigated with an online SOUR monitoring device. Different SOUR curves from the CND to the SCND process were observed, and the ammonia peak obtained from SOUR monitoring could be used to control the aeration time accurately in the SCND process. Accordingly, the SOUR ratio of ammonia-oxidizing bacteria (AOB) to nitrite-oxidizing bacteria (NOB) (SOUR_AOB/SOUR_NOB) increased from 1.40 to 2.93. 16S rRNA MiSeq high-throughput sequencing revealed the dynamics of AOB and NOB, and the ratio of their relative abundances (AOB/NOB) increased from 1.03 to 3.12. Besides, SOUR_AOB/SOUR_NOB displayed significant correlations with the ammonia removal rate (P<0.05), the ammonia oxidation rate/nitrite oxidation rate (P<0.05), the nitrite accumulation rate (P<0.05) and the relative abundance ratio AOB/NOB (P<0.05). Thus, a strategy for evaluating the stability of the SCND process based on online SOUR monitoring is proposed, which provides a theoretical basis for optimizing SCND performance.
Funding: National Key Research and Development Program of China (No. 2020YFB2103402).
Abstract: It is recognized that a city with a livable environment can bring happiness to its residents. In this study, we explored social media users’ emotional states in their current living spaces and investigated the relationship between users’ emotions and urban livability. We adopt six urban livability indicators (including education, medical services, public facilities, leisure places, employment, and transportation) to construct city livability indices. The Analytic Hierarchy Process (AHP) spatial statistical method is applied to identify and analyze the different habitable regions of Wuhan City. For citizens’ emotion analysis, we use a Long Short-Term Memory (LSTM) neural network to analyze Weibo text and obtain the Weibo users’ sentiment scores. The correlation analysis of residents’ emotions and city livability shows a positive correlation between the livable city areas (i.e., areas with higher livability ranking indices) and Weibo users’ sentiment scores (with a Pearson correlation coefficient of 0.881 and a P-value of 0.004). In other words, people who post on Weibo in the highly livable areas of Wuhan express more positive emotional states. Still, the emotion distribution varies across regions, mainly because of the distribution of people and the diversity of the city’s functional areas.
Abstract: In this paper, we summarize recent progress made in deep-learning-based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, emphasizing feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
基金The Jack Brockhoff Foundation(3410)the Dick and Pip Smith Foundation+1 种基金Australian Postgraduate Awardthe David Hay Memorial Fund Award
Abstract: AIM: To utilise a comprehensive cognitive battery to gain a better understanding of cognitive performance in anorexia nervosa (AN). METHODS: Twenty-six individuals with AN and 27 healthy control participants, matched for age, gender and premorbid intelligence, participated in the study. A standard cognitive battery, the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery, was used to investigate performance on seven cognitive domains with the use of 10 different tasks: speed of processing [Brief Assessment of Cognition in Schizophrenia: Symbol Coding, Category Fluency: Animal Naming (Fluency) and Trail Making Test: Part A], attention/vigilance [Continuous Performance Test-Identical Pairs (CPT-IP)], working memory [Wechsler Memory Scale (WMS-III): Spatial Span, and Letter-Number Span (LNS)], verbal learning [Hopkins Verbal Learning Test-Revised], visual learning [Brief Visuospatial Memory Test-Revised], reasoning and problem solving [Neuropsychological Assessment Battery: Mazes], and social cognition [Mayer-Salovey-Caruso Emotional Intelligence Test: Managing Emotions]. Statistical analyses involved the use of multivariate and univariate analyses of variance. RESULTS: Analyses conducted on the cognitive domain scores revealed no overall significant difference between groups nor any interaction between group and domain score [F(1,45)=0.73, P=0.649]. Analyses conducted on each of the specific tasks within the cognitive domains revealed significantly slower reaction times for false-alarm responses on the CPT-IP task in AN [F(1,51)=12.80, P<0.01, Cohen’s d=0.982] and a trend towards poorer performance in AN on the backward component of the WMS-III Spatial Span task [F(1,51)=5.88, P=0.02, Cohen’s d=-0.665]. The finding of slower reaction times for false-alarm responses is, however, limited by the small number of false-alarm responses in either group. CONCLUSION: The findings are discussed in terms of a poorer capacity to manipulate and process visuospatial material in AN.
Funding: National Key R&D Program of China (2017YFA0605004); Guangdong Major Project of Basic and Applied Basic Research (2020B0301030004); National Basic R&D Program of China (2018YFA0606203); Special Fund of the China Meteorological Administration for Innovation and Development (CXFZ2021J026); Special Fund for Forecasters of the China Meteorological Administration (CMAYBY2020-094); Graduate Independent Exploration and Innovation Project of Central South University (2021zzts0477); Science and Technology Planning Program of Guangdong Province (20180207).
Abstract: Tropical cyclone (TC) annual frequency forecasting is significant for disaster prevention and mitigation in Guangdong Province. Based on the NCEP-NCAR reanalysis and NOAA Extended Reconstructed global sea surface temperature (SST) V5 data in winter, the climatic features of TC frequency and prediction models have been studied. During 1951-2019, 353 TCs directly affected Guangdong, with an annual average of about 5.1. TCs experienced an abrupt change from abundance to deficiency in the mid-to-late 1980s, with a slightly decreasing trend and a normal distribution. 338 primary precursors are obtained from statistically significant correlation regions of SST, sea level pressure, 1000 hPa air temperature, 850 hPa specific humidity, 500 hPa geopotential height and zonal wind shear in winter. These 338 primary factors are then reduced to 19 independent predictors by principal component analysis (PCA). Furthermore, Multiple Linear Regression (MLR), Gaussian Process Regression (GPR) and Long Short-Term Memory Networks with Fully Connected Layers (LSTM-FC) models are constructed from the above 19 factors. For three different kinds of test sets, from 2010 to 2019, 2011 to 2019 and 2010 to 2019, the root mean square errors (RMSEs) of MLR, GPR and LSTM-FC between predictions and observations fluctuate within the ranges of 1.05-2.45, 1.00-1.93 and 0.71-0.95, and the average absolute errors (AAEs) within 0.88-1.0, 0.75-1.36 and 0.50-0.70, respectively. As for the 2010-2019 experiment, the mean deviations of the three model outputs from the observations are 0.89, 0.78 and 0.56, with average evaluation scores of 82.22, 84.44 and 88.89, respectively. The prediction-skill comparisons reveal that the LSTM-FC model performs better than MLR and GPR. In conclusion, the LSTM-FC deep learning model may shed light on improving the accuracy of short-term climate prediction of TC frequency. The current research can provide experience for the development of deep learning in this field and help to achieve further progress in TC disaster prevention and mitigation in Guangdong Province.
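The RMSE and AAE scores the abstract reports are standard metrics; a minimal sketch of how they are computed (the TC counts below are made-up illustrative numbers, not the paper's data):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def aae(pred, obs):
    """Average absolute error."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

# Hypothetical annual TC counts: observed vs. predicted for five test years.
obs = [4, 6, 5, 7, 3]
pred = [5, 6, 4, 8, 3]
print(round(rmse(pred, obs), 3), round(aae(pred, obs), 3))  # → 0.775 0.6
```

RMSE penalizes large misses quadratically while AAE weights all misses equally, which is why the paper reports both when comparing MLR, GPR and LSTM-FC.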
Funding: Supported by the National Natural Science Foundation of China (No. 61972040), the Science and Technology Projects of Beijing Municipal Education Commission (No. KM201711417011), and the Premium Funding Project for Academic Human Resources Development in Beijing Union University (No. BPHR2020AZ03).
Abstract: In the task of multi-target stance detection, the mutual influence of content describing different targets causes a reduction in accuracy. To solve this problem, a multi-target stance detection algorithm based on a bidirectional long short-term memory (Bi-LSTM) network with position weights is proposed. First, the corresponding position of the target in the input text is calculated and encoded as a position-weight vector. Next, the position information and the output of the Bi-LSTM layer are fused by a position-weight fusion layer. Finally, the stances toward the different targets are predicted using an LSTM network and softmax classification. The multi-target stance detection corpus of the 2016 American election is used to validate the proposed method. The results demonstrate that the Bi-LSTM network with position weights achieves an advantage of 1.4% in macro-average F1 value in comparison with recent algorithms.
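The abstract does not define the position-weight vector precisely; one plausible reading, offered only as a hypothetical sketch (the linear decay and its rate are assumptions, not the paper's formula), weights each token's Bi-LSTM output by its distance from the target mention:

```python
import numpy as np

def position_weights(seq_len, target_pos, decay=0.1):
    """Hypothetical position-weight vector: tokens closer to the target
    mention get weights near 1, decaying linearly with distance."""
    idx = np.arange(seq_len)
    return np.maximum(1.0 - decay * np.abs(idx - target_pos), 0.0)

def fuse(hidden, weights):
    # Scale each Bi-LSTM output vector by its token's position weight.
    return hidden * weights[:, None]

rng = np.random.default_rng(1)
hidden = rng.normal(size=(8, 6))          # 8 tokens, Bi-LSTM output dim 6
w = position_weights(8, target_pos=3)     # target mentioned at token 3
fused = fuse(hidden, w)
```

Down-weighting tokens far from the target is one way to damp the cross-target interference the abstract identifies as the source of accuracy loss.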
Funding: The authors wish to thank their colleagues and students who were involved in this study and provided valuable implementation and technical support. The research is partly supported by general funding of the IoT and Robotics Education Lab and the FURI program at Arizona State University, and partly by the China Scholarship Council, the Guangdong Science and Technology Department under Grant Numbers 2016A010101020, 2016A010101021, and 2016A010101022, and the Guangzhou Science and Information Bureau under Grant Number 201802010033.
Abstract: College classes are becoming increasingly large. A critical component in scaling class size is the collaboration and interaction among instructors, teaching assistants, and students. We develop a prototype of an intelligent voice instructor-assistant system for supporting large classes, in which Amazon Web Services, Alexa Voice Services, and self-developed services are used. It uses a scraping service to read the questions and answers from past and current course discussion boards, organizes the questions in JavaScript Object Notation format, and stores them in a database, which can be accessed by Amazon Web Services Alexa skills. When a voice question from a student comes in, Alexa translates the spoken sentence into text. Then, a Siamese deep long short-term memory model is introduced to calculate the similarity between the question asked and the questions in the database to find the best-matched answer. Questions with no match are sent to the instructor, and the instructor’s answer is added into the database. Experiments show that the implemented model achieves promising results that can lead to a practical system. The intelligent voice instructor-assistant system starts with a small set of questions; it can grow through learning, improving as more and more questions are asked and answered.
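The Siamese LSTM's similarity scoring step can be sketched as follows; this assumes the common Manhattan-LSTM style similarity exp(-L1) between the two sentence encodings, which the abstract does not confirm, and uses toy 2-D encodings in place of real LSTM outputs:

```python
import numpy as np

def manhattan_similarity(a, b):
    """Similarity in (0, 1]: exp(-L1 distance) between two sentence
    encodings, as in the Manhattan-LSTM (MaLSTM) formulation."""
    return float(np.exp(-np.abs(np.asarray(a) - np.asarray(b)).sum()))

def best_match(query_vec, db_vecs):
    """Index and score of the database encoding most similar to the query."""
    sims = [manhattan_similarity(query_vec, v) for v in db_vecs]
    i = int(np.argmax(sims))
    return i, sims[i]

# Toy encodings standing in for the Siamese LSTM outputs of stored questions.
db = [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
idx, score = best_match([1.0, 0.0], db)   # picks the nearest stored question
```

A threshold on `score` would then decide between returning the matched answer and forwarding the question to the instructor, mirroring the workflow in the abstract.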
Abstract: Understanding the content of source code and its regular expression is very difficult when it is written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using syntax or programming language technologies. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep-learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT) to work as a language translator. The DLBT is based on the transformer, an encoder-decoder structure, and has three major components: tokenizer and embeddings, transformer, and post-processing. Each code line is tokenized into a dense vector. The transformer then captures the relatedness between the source code and the matching pseudo-code without the need for a Recurrent Neural Network (RNN). At the post-processing step, the generated pseudo-code is optimized. The proposed model is assessed using a real Python dataset, which contains more than 18,800 lines of source code written in Python. The experiments show promising performance results compared with other machine translation methods such as the Recurrent Neural Network (RNN). The proposed DLBT records 47.32 and 68.49 on the accuracy and BLEU performance measures, respectively.
Abstract: Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not only address experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
基金the National Natural Science Foundation of China(Project No.20907072)for the financial support of this work.
Abstract: An oxic-anoxic-oxic (O-A-O) system followed by coagulation and ozonation processes was used to study the treatment of coking wastewater. In the O-A-O process, the removals of NH4+-N, total nitrogen and COD were 91.5-93.3%, 91.3-92.6% and 89.1-93.8%, respectively, when employing a hydraulic residence time of 60 h for the biochemical system. The high removal of NH4+-N was obtained thanks to the placement of an aerobic tank in front of the A-O system, which can mitigate the inhibitory effect of toxic compounds in coking wastewater on nitrifying bacteria. The addition of methanol into the anoxic reactor greatly increased the removal of total nitrogen, indicating that denitrifiers can hardly use the organic compounds in coking wastewater as a carbon source for denitrification. COD values of the effluent from the O-A-O system were still higher than 260 mg/L even with a prolonged time of 160 h, mainly due to the highly refractory residual compounds in the effluent. The subsequent coagulation and ozonation processes resulted in a COD removal of 91.5%-93.3% and reduced the relative abundance of large-molecular-weight (MW) organics (>1 kDa) from 55.8% to 20.93% with ozone, PAC and PAM dosages of 100, 150 and 4 mg/L, respectively. Under these conditions, the COD value and the concentration of polycyclic aromatic hydrocarbons in the final effluent were less than 80 and 0.05 mg/L, respectively, which meet the requirements of the Chinese emission standard. These results indicate that the combined technology of the O-A-O process, coagulation and ozonation is a reliable way to treat coking wastewater.
Funding: This research is supported by the National Natural Science Foundation of China (No. 52176196), the National Key Research and Development Program of China (No. 2022YFE0103100), the China Postdoctoral Science Foundation (No. 2021TQ0235), and the Hong Kong Scholars Program (No. XJ2021033).
Abstract: As a high-efficiency hydrogen-to-power device, the proton exchange membrane fuel cell (PEMFC) attracts much attention, especially for automotive applications. Real-time prediction of the output voltage and area specific resistance (ASR) via an on-board model is critical to monitoring the health state of an automotive PEMFC stack. In this study, we use a transient PEMFC system model for dynamic process simulation of the PEMFC to generate the dataset, and a long short-term memory (LSTM) deep learning model is developed to predict the dynamic performance of the PEMFC. The results show that the developed LSTM deep learning model performs much better than other models. A sensitivity analysis on the input features is performed, and three insensitive features are removed, which slightly improves the prediction accuracy and significantly reduces the data volume. The neural structure, sequence duration, and sampling frequency are optimized. We find that the optimal sequence duration for predicting ASR is 5 s or 20 s, and that for predicting the output voltage is 40 s. The sampling frequency can be reduced from 10 Hz to 0.5 Hz and 0.25 Hz, which slightly affects the prediction accuracy but obviously reduces the data volume and computation amount.
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61100043, 61902096 and 61702144) and the Key Project of Science and Technology of Zhejiang Province (2017C01010).
Abstract: Technical debt is a metaphor for seeking short-term gains at the expense of long-term code quality. Previous studies have shown that self-admitted technical debt, which is introduced intentionally, has strong negative impacts on software development and incurs high maintenance overheads. To help developers identify self-admitted technical debt, researchers have proposed many state-of-the-art methods. However, there is still room to improve the effectiveness of the current methods, because self-admitted technical debt comments are characterized by variable length, low proportion and diverse style. Therefore, in this paper, we propose a novel approach based on bidirectional long short-term memory (BiLSTM) networks with an attention mechanism to automatically detect self-admitted technical debt from source code comments. In the BiLSTM, we utilize a balanced cross-entropy loss function to overcome the class-imbalance problem. We experimentally investigate the performance of our approach on a public dataset comprising 62,566 code comments from ten open-source projects. Experimental results show that our approach achieves 81.75% precision, 72.24% recall and 75.86% F1-score on average, outperforming the state-of-the-art text-mining-based method by 8.14%, 5.49% and 6.64%, respectively.
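The balanced cross-entropy idea mentioned in the abstract can be sketched as follows; the class weight beta=0.9 is an illustrative value, not the paper's setting:

```python
import numpy as np

def balanced_cross_entropy(probs, labels, beta=0.9):
    """Binary cross-entropy with the positive (minority, debt) class weighted
    by beta and the negative class by 1 - beta, to counter class imbalance."""
    probs = np.clip(np.asarray(probs, float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels, float)
    loss = -(beta * labels * np.log(probs)
             + (1 - beta) * (1 - labels) * np.log(1 - probs))
    return float(loss.mean())

# A missed positive (debt comment scored 0.1) is penalized far more heavily
# than an equally wrong negative (non-debt comment scored 0.9).
miss_pos = balanced_cross_entropy([0.1], [1])
miss_neg = balanced_cross_entropy([0.9], [0])
```

Because debt comments are rare (the low-proportion problem the abstract names), upweighting the positive class keeps the network from collapsing to the trivial "never debt" prediction.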