The Robogymnast is a triple-link underactuated pendulum that mimics a human gymnast hanging from a horizontal bar. In this paper, two multi-objective optimization methods are developed using invasive weed optimization (IWO): the weighted criteria method IWO (WCMIWO) and the fuzzy logic IWO hybrid (FLIWOH). The two methods were used to find the optimum diagonal values of the Q matrix of a linear quadratic regulator (LQR) controller that balances the Robogymnast in an upright configuration. Two LQR controllers were first developed using the parameters obtained from the two optimization methods. The process was then repeated with disturbance applied to the Robogymnast states, yielding a second pair of LQR controllers. The responses of the controllers were then tested in different simulation scenarios and their performance evaluated. The results show that all four controllers can balance the Robogymnast with varying accuracy, and that the controllers trained with disturbance achieve faster settling times.
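The role the Q diagonal plays in shaping an LQR gain can be sketched as follows. This is a minimal illustration only: the plant below is a double integrator, not the Robogymnast's triple-link dynamics (which the abstract does not give), and the diagonal entries are placeholder values of the kind an optimizer such as IWO would search over.

```python
# Minimal sketch: computing an LQR gain from a diagonal Q matrix.
# The double-integrator plant here is a stand-in, NOT the Robogymnast model.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, q_diag, r=1.0):
    """Solve the continuous algebraic Riccati equation and return the gain K."""
    Q = np.diag(q_diag)
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)  # K = R^{-1} B^T P

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # double-integrator state matrix
B = np.array([[0.0],
              [1.0]])

# Placeholder diagonal entries; an IWO-style optimizer would tune these.
K = lqr_gain(A, B, q_diag=[10.0, 1.0])

# The closed loop A - B K must be stable (all eigenvalues in the left half-plane).
eigs = np.linalg.eigvals(A - B @ K)
assert np.all(eigs.real < 0)
```

Each candidate Q diagonal yields a gain K this way; the optimization methods in the paper score the resulting closed-loop response (e.g. settling behaviour) and iterate toward better diagonals.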
Funding: Majlis Amanah Rakyat (MARA) and the German Malaysian Institute (GMI) for their sponsorship.