Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer's disease (AD). Most studies predominantly employ Convolutional Neural Networks (CNNs), which focus solely on local features, thus encountering difficulties in handling global features. In contrast to natural images, Structural Magnetic Resonance Imaging (sMRI) images exhibit a higher number of channel dimensions. However, during the Position Embedding stage of Multi-Head Self-Attention (MHSA), the coded information related to the channel dimension is disregarded. To tackle these issues, we propose the RepBoTNet-CESA network, an advanced AD-aided diagnostic model that is capable of learning local and global features simultaneously. It combines the advantages of CNN networks in capturing local information and Transformer networks in integrating global information, reducing computational costs while achieving excellent classification performance. Moreover, it uses the Cubic Embedding Self Attention (CESA) proposed in this paper to incorporate the channel code information, enhancing the classification performance within the Transformer structure. Finally, the RepBoTNet-CESA performs well in various AD-aided diagnosis tasks, with an accuracy of 96.58%, precision of 97.26%, and recall of 96.23% in the AD/NC task; an accuracy of 92.75%, precision of 92.84%, and recall of 93.18% in the EMCI/NC task; and an accuracy of 80.97%, precision of 83.86%, and recall of 80.91% in the AD/EMCI/LMCI/NC task. This demonstrates that RepBoTNet-CESA delivers outstanding outcomes in various AD-aided diagnostic tasks. Furthermore, our study has shown that MHSA exhibits superior performance compared to conventional attention mechanisms in enhancing ResNet performance. Besides, the deeper RepBoTNet-CESA network fails to make further progress in AD-aided diagnostic tasks.
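For illustration only: the sketch below shows one way a learnable channel-axis embedding can be injected ahead of multi-head self-attention, which is the general idea CESA builds on. It is not the authors' implementation; the class name, parameter shapes, and the additive way the channel embedding enters are all assumptions.

```python
import torch
import torch.nn as nn

class ChannelAwareMHSA(nn.Module):
    """MHSA over the spatial tokens of a (B, C, H, W) feature map, with a learnable
    embedding for the channel axis added on top of the usual spatial position
    embedding; a rough stand-in for the channel-aware idea behind CESA."""

    def __init__(self, dim: int, heads: int, h: int, w: int):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.pos_spatial = nn.Parameter(torch.zeros(1, h * w, dim))  # encodes (H, W)
        self.pos_channel = nn.Parameter(torch.zeros(1, 1, dim))      # encodes channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)             # (B, H*W, C) token sequence
        t = t + self.pos_spatial + self.pos_channel  # inject both embeddings
        q, k, v = self.qkv(t).chunk(3, dim=-1)
        shape = (b, h * w, self.heads, c // self.heads)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, h * w, c)
        return self.proj(out).transpose(1, 2).view(b, c, h, w)

# Usage: y = ChannelAwareMHSA(dim=64, heads=4, h=8, w=8)(torch.randn(2, 64, 8, 8))
```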
Introduction to Computer Science, as one of the fundamental courses in computer-related majors, plays an important role in the cultivation of computer professionals. However, traditional teaching models and content can no longer fully meet the needs of modern information technology development. In response to these issues, this article introduces the concept of computational creative thinking, optimizes course content, adopts exploratory teaching methods, and innovates course assessment methods, aiming to comprehensively enhance students' computational thinking and innovative abilities. Continuously improving and promoting this teaching model will undoubtedly raise computer education in universities to a new level.
BACKGROUND: Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM: To evaluate the feasibility of the real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). CADx influence on the optical diagnosis of an expert endoscopist was also investigated. METHODS: AI4CRP was developed in-house and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low confidence characterizations were defined as values 40% around the cut-off value of 0.6 (>0.36 and <0.76). Self-critical AI4CRP's diagnostic performances excluded low confidence characterizations. RESULTS: AI4CRP use was feasible and performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher compared to AI4CRP. CAD EYE had an 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. Diagnostic performances of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). Diagnostic performances of the AI-assisted endoscopist were higher compared to both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION: Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performances compared to AI4CRP.
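The confidence thresholding described above is easy to make concrete. The hypothetical helper below assumes the 40% band is measured relative to the distance from the cut-off to each end of the 0-1 scale, which reproduces the 0.36 and 0.76 bounds quoted in the abstract; the function name, arguments, and defaults are invented.

```python
def characterize(conf: float, cutoff: float = 0.6, band: float = 0.4):
    """Return (label, low_confidence) for a calibrated confidence value in [0, 1].
    Assumed reading of the abstract: low confidence = within a 40% band around
    the cutoff, i.e. between 0.6 - 0.4*0.6 = 0.36 and 0.6 + 0.4*(1 - 0.6) = 0.76."""
    lo = cutoff - band * cutoff            # 0.36
    hi = cutoff + band * (1.0 - cutoff)    # 0.76
    label = "premalignant" if conf >= cutoff else "benign"
    return label, lo < conf < hi

# e.g. characterize(0.55) -> ("benign", True): excluded by self-critical AI4CRP
```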
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the “algorithmization” of “counterfactuals”. However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiment (DOE) methods to achieve the generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of the modeling of an artificial society; 2) Interpretative module: selecting a factorial experimental design solution to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the “rider race”.
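As a minimal sketch of how the interpretative and predictive modules could fit together, the toy below runs a 3x3 full factorial design over two invented influencing factors of a stand-in "artificial society" and fits a least-squares meta-model. The simulator, factor names, and effect sizes are all assumptions for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

def run_society(bonus_rate: float, rider_density: float) -> float:
    """Stand-in for one artificial-society run; returns a macro response
    (e.g., average delivery delay). The real model would be agent-based."""
    return (2.0 * bonus_rate - 0.5 * rider_density
            + 0.1 * bonus_rate * rider_density + rng.normal(0.0, 0.05))

# Interpretative module: a full factorial design over two influencing factors
levels = [0.0, 0.5, 1.0]
design = list(itertools.product(levels, levels))
y = np.array([run_society(a, b) for a, b in design])

# Predictive module: least-squares meta-model y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2
X = np.array([[1.0, a, b, a * b] for a, b in design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated main and interaction effects:", coef.round(3))
```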
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with the aforementioned two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
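A minimal sketch of hypergraph structure modeling, assuming the common incidence-matrix formulation: a hyperedge can join any number of nodes, and one degree-normalized propagation step mixes features along those high-order relations. The data and the specific operator choice are illustrative only, not DHG's API.

```python
import numpy as np

# Incidence matrix H: rows = nodes, columns = hyperedges.
H = np.array([[1, 0],     # node 0 in hyperedge e0
              [1, 1],     # node 1 in e0 and e1
              [1, 1],     # node 2 in e0 and e1 (e0 is a 3-way correlation)
              [0, 1]], dtype=float)
X = np.eye(4)                               # toy node features

De = np.diag(H.sum(axis=0))                 # hyperedge degrees
Dv = np.diag(H.sum(axis=1))                 # node degrees
# One random-walk-style smoothing step: X' = Dv^-1 H De^-1 H^T X
X_new = np.linalg.inv(Dv) @ H @ np.linalg.inv(De) @ H.T @ X
print(X_new.round(2))                       # features mixed along high-order relations
```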
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by benchmarked algorithms is reduced to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
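The abstract's delay-greedy idea can be illustrated with a toy decision rule: estimate transmission, queueing, and computation delay on each candidate device and pick the minimum. This is a hypothetical sketch, not the paper's DGCO pseudocode; all quantities and names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    name: str
    cycles_per_s: float   # computing capability
    queue_s: float        # current backlog, in seconds
    link_bps: float       # uplink rate to this device

def offload(task_bits: float, task_cycles: float, edges: list[Edge]) -> Edge:
    """Greedy choice: minimize estimated transmission + queueing + compute delay."""
    def est_delay(e: Edge) -> float:
        return task_bits / e.link_bps + e.queue_s + task_cycles / e.cycles_per_s
    return min(edges, key=est_delay)

edges = [Edge("mec-server", 5e9, 0.8, 2e7), Edge("neighbor", 1e9, 0.1, 5e7)]
print(offload(task_bits=4e6, task_cycles=2e9, edges=edges).name)  # -> mec-server
```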
Dear Editor, This letter deals with the tracking problem for non-cooperative maneuvering targets based on underwater sensor networks. Considering the acoustic intensity feature of underwater targets, a feature-aided multi-model tracking method for maneuvering targets is proposed.
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
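To make "homomorphic encryption-based secure computation" concrete, here is a toy Paillier scheme (additively homomorphic): the product of two ciphertexts decrypts to the sum of the plaintexts. The tiny fixed primes are for readability only; this is not secure and not the paper's implementation.

```python
import math
import random

p, q = 293, 433                      # demo primes only; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                            # standard choice that keeps decryption simple
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = (L(g^lam mod n^2))^-1 mod n

def enc(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) / n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = enc(20), enc(22)
assert dec((a * b) % n2) == 42       # Enc(x) * Enc(y) decrypts to x + y
```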
This paper presents a comprehensive exploration into the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computing on the embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5–6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
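A secure two-party matrix multiplication of the kind EG-STC accelerates can be sketched with additive secret sharing and a Beaver triple. The trusted dealer, ring size, and NumPy realization below are illustrative assumptions, and the GPU parallelism is omitted; neither party ever sees the other's matrix in the clear.

```python
import numpy as np

MOD = 2**32                                     # shares live in the ring Z_2^32
rng = np.random.default_rng(0)

def rand(shape):
    return rng.integers(0, MOD, shape, dtype=np.uint64)

def share(x):                                   # split x into two additive shares
    r = rand(x.shape)
    return r, (x - r) % MOD

X = rng.integers(0, 100, (2, 3)).astype(np.uint64)   # party 0's private input
Y = rng.integers(0, 100, (3, 2)).astype(np.uint64)   # party 1's private input
X0, X1 = share(X); Y0, Y1 = share(Y)

# Dealer: Beaver triple A, B, C with C = A @ B (mod 2^32), shared to both parties
A, B = rand(X.shape), rand(Y.shape)
C = (A @ B) % MOD
A0, A1 = share(A); B0, B1 = share(B); C0, C1 = share(C)

# Parties open E = X - A and F = Y - B (these masked values reveal nothing)
E = ((X0 - A0) + (X1 - A1)) % MOD
F = ((Y0 - B0) + (Y1 - B1)) % MOD

# Local share computation: XY = C + E@B + A@F + E@F
Z0 = (C0 + E @ B0 + A0 @ F) % MOD               # party 0's share
Z1 = (C1 + E @ B1 + A1 @ F + E @ F) % MOD       # party 1 also adds the public E@F
assert np.array_equal((Z0 + Z1) % MOD, (X @ Y) % MOD)
```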
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
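The Dinkelbach transform mentioned above replaces a fractional objective max f(x)/g(x) with a parameterized subtractive one, max f(x) - λg(x), and iterates λ up to the optimal ratio. A toy one-dimensional worked example, with f and g invented purely for illustration:

```python
import numpy as np

x_grid = np.linspace(0.0, 3.0, 3001)
f = lambda x: np.log1p(x)        # stands in for the secure computation bits
g = lambda x: 0.5 + x            # stands in for the energy consumption

lam = 0.0
for _ in range(20):
    # Inner problem: maximize the subtractive form f(x) - lam * g(x)
    x_star = x_grid[np.argmax(f(x_grid) - lam * g(x_grid))]
    new_lam = f(x_star) / g(x_star)         # Dinkelbach update of the ratio
    if abs(new_lam - lam) < 1e-9:
        break
    lam = new_lam
print(f"optimal ratio {lam:.4f} attained at x = {x_star:.3f}")
```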
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest as a way to boost the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the view of experimental practice, especially the inefficiency of the traditional “trial and error” method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of the current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
Aptamers are a type of single-chain oligonucleotide that can combine with a specific target. Due to their simple preparation, easy modification, stable structure and reusability, aptamers have been widely applied as biochemical sensors for medicine, food safety and environmental monitoring. However, there is little research on aptamer-target binding mechanisms, which limits their application and development. Computational simulation has gained much attention for revealing aptamer-target binding mechanisms at the atomic level. This work summarizes the main simulation methods used in the mechanistic analysis of aptamer-target complexes, the characteristics of binding between aptamers and different targets (metal ions, small organic molecules, biomacromolecules, cells, bacteria and viruses), the types of aptamer-target interactions and the factors influencing their strength. It provides a reference for further use of simulations in understanding aptamer-target binding mechanisms.
One-way quantum computation focuses on initially generating an entangled cluster state followed by a sequence of measurements with classical communication of their individual outcomes. Recently, a delayed-measurement approach has been applied to replace classical communication of individual measurement outcomes. In this work, by considering the delayed-measurement approach, we demonstrate a modified one-way CNOT gate using the on-cloud superconducting quantum computing platform Quafu. The modified protocol for one-way quantum computing requires only three qubits rather than the four used in the standard protocol. Since this modified cluster state decreases the number of physical qubits required to implement one-way computation, both the scalability and complexity of the computing process are improved. Compared to previous work, this modified one-way CNOT gate is superior to the standard one in both fidelity and resource requirements. We have also numerically compared the behavior of standard and modified methods in large-scale one-way quantum computing. Our results suggest that in the noisy intermediate-scale quantum (NISQ) era, the modified method shows a significant advantage for one-way quantum computation.
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need to provide a basis for interpretation using logging simulation models. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information of the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, this requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time response results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments is proposed. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through a Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the calculated porosity results of Monte Carlo and FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
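To first order, the flux-sensitivity core of such a fast forward model reduces to weighting cross-section perturbations inside the effective detection volume by precomputed sensitivities, R ≈ R0 + Σ_i S_i ΔΣ_i, so no random sampling is needed at query time. A schematic with invented numbers:

```python
import numpy as np

R0 = 1200.0                              # baseline detector counts (library value)
S = np.array([-350.0, -120.0, 40.0])     # flux sensitivities of 3 volume cells
d_sigma = np.array([0.02, -0.01, 0.05])  # cross-section perturbations per cell

R = R0 + S @ d_sigma                     # fast forward estimate of the response
print(f"perturbed response ~ {R:.1f} counts")
```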
Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter to pause AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an “epistemological nightmare”, AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks. 1) Mandatory third-party permissions that allow computers to run applications at the expense of introducing vulnerabilities. 2) The Halting Problem of Turing-complete AI programming languages potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under the legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: “Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability.” Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
This study developed a numerical model to efficiently treat solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate is primarily dependent on the average residence time of particles, which can be increased by decreasing the flow rate and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs, including the flow field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state, were analyzed. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of slow flow increased with the increase in inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Space between reefs had a minor influence on the maximum velocity and height of upwelling. With the increase in space from 0.5 L to 1.5 L (L is the reef length), the length of slow flow in the front and back of the combined reefs increased slightly. When the space was 2.0 L, the length of the slow flow decreased. In four different spacings, eddies were present inside and at the back of each reef. The maximum vorticity was negatively correlated with space from 0.5 L to 1.5 L, but under 2.0 L space, the maximum vorticity was close to the vorticity of a single reef under the same inflow velocity.
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers is studied in this paper. Then a user benefit function EoU (Experience of Users) is proposed, jointly considering energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which are often faced with the challenge of slow convergence and poor stability. Thus, a combined offloading algorithm, i.e., a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate the dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates the convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and adaptive genetic algorithm (AGA). Experiment results show that the Gini coefficient and the adaptive heuristic operators can accelerate the convergence speed, and the proposed algorithm performs better in terms of convergence while obtaining higher EoU. The simulation code of the proposed algorithm is available at https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
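The Gini coefficient gives a scalar measure of how unequal the population's fitness values are, which can drive adaptive crossover and mutation rates. The sketch below is a guess at that mechanism: the Gini computation itself is standard (assuming positive fitness values), but the specific rate-update rules are invented here, not GCAGA's.

```python
import numpy as np

def gini(fitness: np.ndarray) -> float:
    """Gini coefficient of positive fitness values; 0 = all equal, -> 1 = unequal."""
    f = np.sort(fitness.astype(float))
    n = f.size
    return (2 * np.arange(1, n + 1) - n - 1) @ f / (n * f.sum())

def adaptive_rates(fitness: np.ndarray, pc0: float = 0.8, pm0: float = 0.05):
    """Hypothetical update: a homogeneous (low-Gini, nearly converged) population
    gets a mutation boost for exploration; a diverse one gets more crossover."""
    g = gini(fitness)
    pc = min(pc0 * (0.5 + g), 1.0)
    pm = min(pm0 * (1.5 - g), 1.0)
    return pc, pm

pop_fitness = np.array([3.1, 3.0, 3.2, 2.9, 3.0])   # nearly converged population
print(adaptive_rates(pop_fitness))                   # low Gini -> boosted mutation
```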
For living anionic polymerization (LAP), the solvent has a great influence on both the reaction mechanism and kinetics. In this work, using the classical butyl lithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from the initiator to the solvent. Furthermore, we also found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated ΔG_TS results. Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. It is found that in benzene and n-pentane, the polymerization rate exhibits first-order kinetics, while slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow free-ion formation rate, leading to a deviation from first-order kinetics.
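The first-order behavior reported for benzene and n-pentane follows directly from the living-polymer rate law: with fast initiation, the living-chain concentration [P*] is constant, so d[M]/dt = -k_p[P*][M] and ln([M]0/[M]) grows linearly in time. A short numerical illustration with invented rate constants:

```python
import numpy as np

kp, P_star, M0 = 50.0, 1e-3, 1.0     # L/(mol s), mol/L, mol/L; illustrative values
t = np.linspace(0.0, 60.0, 7)
M = M0 * np.exp(-kp * P_star * t)    # closed-form solution of the rate law

for ti, Mi in zip(t, M):
    print(f"t = {ti:5.1f} s   ln([M]0/[M]) = {np.log(M0 / Mi):.3f}")
# The printed values grow linearly with t: the signature of first-order kinetics,
# which THF breaks when initiation (free-ion formation) is slow.
```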
The digitization of administrative activities is a technique that not only optimizes resources but also professionalizes the working methods of public and private services. This dematerialization process involves technologies based on computer equipment, which, after use, becomes cumbersome waste. The aim of this work was to take stock of the management of waste computer equipment imported into the Republic of Guinea, with a view to proposing environmentally sustainable management methods in the short term. To achieve this, data were collected through investigation methods (observations, interviews, and questionnaires). This study reveals an excess of imports of electrical and electronic equipment in general, and computer equipment in particular, over the last ten years (2009-2019), with an import rate ranging from 4.03 to 54.45%. The study also documents the different ways in which computer and electronic equipment of all kinds are managed, as well as their failings: depending on the user, electronic waste is stored, recycled, or discarded into nature or at waste storage points, often mixed with household waste. Companies specializing in the management of this type of waste are nearly nonexistent, and only a few regulatory texts are in place. A single such company exists for the entire country, and it is unknown to the majority of users.