Securing the AI supply chain: Mitigating vulnerabilities in AI model development and deployment
Published 2024 in World Journal of Advanced Research and Reviews
ABSTRACT
The rapid advancement and integration of Artificial Intelligence (AI) across critical sectors, including healthcare, finance, defense, and infrastructure, have exposed an often-overlooked risk: vulnerabilities within the AI supply chain. This research examines the security challenges and threats affecting AI model development and deployment, focusing on adversarial attacks, data poisoning, model theft, and compromised third-party components. By dissecting the AI supply chain into its core stages (data sourcing, model training, deployment, and maintenance), this study identifies key entry points for malicious actors. The paper proposes a multi-layered security framework combining blockchain-based data provenance, federated learning for decentralized model training, and a zero-trust architecture for secure deployment. Additionally, it explores how adversarial training, model watermarking, and real-time anomaly detection can mitigate risks without sacrificing model performance. Case studies of high-profile AI breaches are analyzed to demonstrate the consequences of unsecured pipelines, underscoring the urgency of securing AI systems.
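The data-provenance component of the proposed framework can be illustrated with a minimal append-only hash chain, the core integrity mechanism underlying blockchain-based provenance. This is a simplified sketch, not the paper's implementation; the `ProvenanceChain` class and all field names are hypothetical. Each entry commits to the previous entry's hash, so tampering with any recorded dataset digest invalidates every later link.

```python
import hashlib
import json

def entry_hash(fields: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of the fields."""
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Append-only hash chain for dataset provenance (illustrative sketch).

    Each entry records a dataset identifier, a content digest, and the hash
    of the previous entry, so the chain can be re-verified end to end.
    """

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, dataset_id: str, content_digest: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        fields = {"dataset_id": dataset_id,
                  "content_digest": content_digest,
                  "prev": prev}
        entry = dict(fields, hash=entry_hash(fields))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any altered entry breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            fields = {"dataset_id": e["dataset_id"],
                      "content_digest": e["content_digest"],
                      "prev": e["prev"]}
            if e["prev"] != prev or e["hash"] != entry_hash(fields):
                return False
            prev = e["hash"]
        return True
```

In a full blockchain deployment the chain would be replicated across mutually distrusting parties; the sketch above captures only the tamper-evidence property that makes such provenance records auditable.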
PUBLICATION RECORD
- Publication year: 2024
- Publication date: 2024-05-30
- Venue: World Journal of Advanced Research and Reviews
- Source metadata: Semantic Scholar