Defense Mechanisms and Adversarial Attacks: A Review
Rajesh Singh, A. Gehlot, A. Joshi
Published 2022 in 2022 International Interdisciplinary Humanitarian Conference for Sustainability (IIHC)
ABSTRACT
As AI and deep learning (DL) approaches rapidly advance, guaranteeing the safety and reliability of deployed methods is more important than ever. In recent years it has become well established that DL models are susceptible to security breaches when presented with hostile data: crafted samples that appear benign to humans can cause models to behave in unexpected ways. Adversarial attacks have also been carried out successfully in real-world, physical settings, adding further evidence of their viability. As a result, adversarial attack and defense techniques have become a focus of intense study in machine learning and cybersecurity. In this paper, we first lay out the theoretical foundations, algorithmic mechanisms, and practical applications of adversarial attack techniques. We then survey ongoing research on defense strategies across the field's broad frontier. Finally, we examine a number of open problems and challenges, with the aim of stimulating further study of this vital subject.
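As a concrete illustration of the attack family such reviews cover, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known adversarial attack techniques. A tiny logistic-regression model stands in for a deep network, and the weights and inputs are invented for illustration; nothing here is taken from the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM: step the input in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)             # model's probability of class 1
    grad_x = (p - y_true) * w          # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)   # bounded L-infinity perturbation

# Illustrative model and input (hypothetical values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, -0.2])              # clean input with true label 1

clean_pred = sigmoid(w @ x + b) > 0.5  # correctly classified as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)
adv_pred = sigmoid(w @ x_adv + b) > 0.5  # a small per-feature change flips the label
```

The perturbation changes each feature by at most `eps`, yet it is chosen in the direction that most increases the loss, which is why such inputs can look unremarkable to a human while misleading the model.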
PUBLICATION RECORD
- Publication year
2022
- Venue
2022 International Interdisciplinary Humanitarian Conference for Sustainability (IIHC)
- Publication date
2022-11-18
- Fields of study
Not labeled
- Source metadata
Semantic Scholar