Securing the AI supply chain: Mitigating vulnerabilities in AI model development and deployment

Isabirye Edward Kezron

Published 2024 in World Journal of Advanced Research and Reviews

ABSTRACT

The rapid advancement and integration of Artificial Intelligence (AI) across critical sectors — including healthcare, finance, defense, and infrastructure — have exposed an often-overlooked risk: vulnerabilities within the AI supply chain. This research examines the security challenges and potential threats affecting AI model development and deployment, focusing on adversarial attacks, data poisoning, model theft, and compromised third-party components. By dissecting the AI supply chain into its core stages — data sourcing, model training, deployment, and maintenance — this study identifies key entry points for malicious actors. The paper proposes a multi-layered security framework combining blockchain-based data provenance, federated learning for decentralized model training, and zero-trust architecture to ensure secure deployment. Additionally, it explores how adversarial training, model watermarking, and real-time anomaly detection can mitigate risks without sacrificing model performance. Case studies of high-profile AI breaches are analyzed to demonstrate the consequences of unsecured pipelines, emphasizing the urgency of securing AI systems.
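Of the mitigations the abstract names, blockchain-based data provenance is the most mechanical to illustrate. The sketch below is not the paper's implementation; it is a minimal hash-chain in pure Python (class and field names are hypothetical) showing the core idea: each provenance record commits to the previous record's hash, so any tampering with an earlier data-sourcing entry invalidates every later block.

```python
import hashlib
import json


def block_hash(prev_hash, payload):
    """Hash a provenance record together with the previous block's hash,
    chaining the records so earlier entries cannot be silently altered."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()


class ProvenanceChain:
    """Append-only log of dataset/model artifacts (hypothetical sketch)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.blocks = []  # list of (hash, payload) tuples

    def add(self, payload):
        prev = self.blocks[-1][0] if self.blocks else self.GENESIS
        h = block_hash(prev, payload)
        self.blocks.append((h, payload))
        return h

    def verify(self):
        """Recompute every hash from the genesis value; any edited
        payload breaks the chain from that point onward."""
        prev = self.GENESIS
        for h, payload in self.blocks:
            if block_hash(prev, payload) != h:
                return False
            prev = h
        return True
```

In a real pipeline the payloads would carry dataset digests, training-run identifiers, and signer metadata, and the chain would be anchored to a distributed ledger rather than held in memory; the verification logic, however, is the same.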
