Funding: Supported by the Natural Science Foundation of Anhui Province (Grant Number 2208085MG181), the Science Research Project of Higher Education Institutions in Anhui Province, Philosophy and Social Sciences (Grant Number 2023AH051063), and the Open Fund of the Key Laboratory of Anhui Higher Education Institutes (Grant Number CS2021-ZD01).
Abstract: The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. Most DFJSP research considers only machine constraints and ignores worker constraints. As a critical production factor, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, this paper studies the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC), minimizing makespan and total energy consumption. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain scheduling schemes. To further enhance local search capability and expand the solution space, two wolf predation strategies and three critical-factory neighborhood structures selected by Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results show that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
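To illustrate the kind of Q-learning-driven neighborhood selection described above, the sketch below keeps a small Q-table whose actions are three candidate neighborhood structures applied to a critical factory. The state encoding, reward signal, and neighborhood names are illustrative assumptions, not the paper's exact formulation.

```python
import random

# Hypothetical tabular Q-learning selector for local-search neighborhood
# structures, in the spirit of Q-MOGWO: the agent learns which of three
# critical-factory moves to apply next. States, rewards, and move names
# are assumptions for illustration only.

NEIGHBORHOODS = ["swap_in_critical_factory",
                 "insert_in_critical_factory",
                 "reassign_machine_worker"]

class QNeighborhoodSelector:
    def __init__(self, n_states=3, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = [[0.0] * len(NEIGHBORHOODS) for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy choice over the three neighborhood structures.
        if random.random() < self.epsilon:
            return random.randrange(len(NEIGHBORHOODS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[next_state])
        td_error = reward + self.gamma * best_next - self.q[state][action]
        self.q[state][action] += self.alpha * td_error

# Illustrative usage: reward the selector when the chosen move improves a
# mock scalarized objective combining makespan and energy consumption.
selector = QNeighborhoodSelector()
state, objective = 0, 100.0
for step in range(50):
    action = selector.choose(state)
    new_objective = objective - random.uniform(-1.0, 2.0)  # mock local-search outcome
    improved = new_objective < objective
    selector.update(state, action, 1.0 if improved else -1.0, 1 if improved else 2)
    state, objective = (1 if improved else 2), min(objective, new_objective)
print("Learned Q-table:", selector.q)
```

In such a scheme the Q-table is what decides, at each local-search step, which neighborhood structure to try next, so moves that have recently yielded improvements are chosen more often.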
Abstract: With the popularization of multi-variety, small-batch production patterns, the flexible job shop scheduling problem (FJSSP) has been widely studied. Because of space constraints in a flexible shop, multiple machines frequently share processing resources, which results in resource preemption when processing workpieces. Resource preemption further complicates the constraints of scheduling problems that are already difficult to solve. In this paper, the flexible job shop scheduling problem under the processing resource preemption scenario is modeled, and a two-layer rule scheduling algorithm based on deep reinforcement learning is proposed to minimize the scheduling time. Simulation experiments compare our scheduling algorithm with two traditional meta-heuristic optimization algorithms across different processing resource distribution scenarios in a static scheduling environment. The results suggest that the two-layer rule scheduling algorithm based on deep reinforcement learning is more effective than the meta-heuristic algorithms in processing resource preemption scenarios. Ablation, generalization, and dynamic experiments are performed to demonstrate the strong performance of our method for FJSSP under resource preemption.
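As a rough illustration of a two-layer rule scheduling loop, the sketch below uses an upper layer that selects a dispatching rule and a lower layer that applies the rule to pick the next operation and a free processing resource. The rules (SPT, LPT, FIFO), the random stand-in for the deep reinforcement learning policy, and the mock data are assumptions for illustration, not the authors' implementation.

```python
import random

# Minimal two-layer dispatching sketch: the upper layer chooses a rule,
# the lower layer applies it to pick an operation and assign a resource.
# The random rule chooser stands in for a trained deep RL policy.

RULES = {
    "SPT": lambda ops: min(ops, key=lambda o: o["proc_time"]),      # shortest processing time
    "LPT": lambda ops: max(ops, key=lambda o: o["proc_time"]),      # longest processing time
    "FIFO": lambda ops: min(ops, key=lambda o: o["release_time"]),  # earliest released first
}

def upper_layer_policy(state):
    # Placeholder for the DRL policy: here it simply picks a rule at random.
    return random.choice(list(RULES))

def lower_layer_dispatch(rule_name, ready_ops, resources):
    # Apply the selected rule, then assign the chosen operation to the
    # resource that becomes available earliest (a simple tie-break).
    op = RULES[rule_name](ready_ops)
    resource = min(resources, key=lambda r: r["available_at"])
    start = max(op["release_time"], resource["available_at"])
    finish = start + op["proc_time"]
    resource["available_at"] = finish
    return op, resource, finish

# Illustrative run with mock operations and two shared processing resources.
ready_ops = [{"id": i, "proc_time": random.randint(2, 9), "release_time": 0} for i in range(5)]
resources = [{"id": r, "available_at": 0} for r in range(2)]
makespan = 0
while ready_ops:
    rule = upper_layer_policy(state=None)
    op, res, finish = lower_layer_dispatch(rule, ready_ops, resources)
    ready_ops.remove(op)
    makespan = max(makespan, finish)
    print(f"rule={rule} op={op['id']} -> resource {res['id']} finishes at {finish}")
print("makespan:", makespan)
```

Splitting the decision into rule selection and rule application keeps the action space of the learning agent small while still letting the lower layer handle preempted or shared resources explicitly.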