I. Introduction
Recently, machine learning-based malware detection approaches have been extensively applied to cyberspace security and have achieved excellent detection results [1]–[3]. However, because machine learning models were originally designed with functionality rather than security in mind, they exhibit clear vulnerabilities to adversarial attacks [4]–[7]. Effective defenses against such attacks remain lacking, particularly in the domain of adversarial malware [8]–[13]. Therefore, understanding how adversarial examples are generated will provide an essential theoretical basis and technical support for future research on adversarial defense.