Funding: This work is supported by the National Natural Science Foundation of China under Grant Nos. U1636215 and 61902082, the Guangdong Key R&D Program of China (2019B010136003), and the National Key R&D Program of China (2019YFB1706003).
Abstract: The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have security problems of their own that may lead to unexpected results. Specifically, they can easily be attacked by adversarial examples, which are generated by adding small perturbations to the original images and cause incorrect license plate recognition. Several classic methods exist for generating adversarial examples, but they cannot be applied to the LPRS directly. In this paper, we modify some classic methods to generate adversarial examples that mislead the LPRS. We conduct extensive evaluations on the HyperLPR system, and the results show that it can easily be attacked by such adversarial examples. In addition, we show that the generated images can also attack black-box systems; for instance, the Baidu LPR system likewise produces incorrect recognitions on them. We hope this paper helps improve the LPRS by raising awareness of the existence of such adversarial attacks.
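The "small perturbations" mentioned above are typically produced by gradient-sign methods such as FGSM, one of the classic attacks the paper builds on. The following is a minimal sketch of FGSM against a plain linear softmax classifier standing in for a plate-character recognizer; the weights, input, and epsilon value are hypothetical stand-ins, not the actual HyperLPR model or the paper's modified attack.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, b, x, y, epsilon=0.03):
    """FGSM: x' = clip(x + epsilon * sign(d loss / d x)).

    For a linear softmax classifier with cross-entropy loss, the
    gradient of the loss w.r.t. the input x is W^T (p - onehot(y)).
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    # Step in the gradient-sign direction, keeping pixels in [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))        # hypothetical 10-class weights
b = np.zeros(10)
x = rng.random(64)                       # fake flattened character crop
y = int(np.argmax(softmax(W @ x + b)))   # currently predicted class

x_adv = fgsm(W, b, x, y)
print(np.abs(x_adv - x).max() <= 0.03 + 1e-9)   # perturbation stays bounded
```

The key property, visible in the last line, is that the perturbation is bounded by epsilon per pixel, so the adversarial plate image remains visually close to the original while pushing the classifier's loss upward.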