Data Poisoning

Definition ∞ Data poisoning is an adversarial attack in which malicious actors intentionally corrupt the training data used by machine learning models, producing biased or incorrect outputs. In decentralized AI or data markets, poisoning can compromise the integrity of shared datasets, undermining the reliability of predictions or decisions made by AI systems. Such attacks degrade model performance, introduce exploitable vulnerabilities, and threaten data-driven blockchain applications.
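To make the mechanism concrete, here is a minimal, self-contained sketch (not from the source; all names and data are illustrative) showing how flipping a few training labels shifts a toy nearest-centroid classifier's decision boundary and causes misclassification:

```python
def centroid_classifier(points, labels):
    # Compute per-class means; predict whichever centroid is nearer.
    c0 = [p for p, l in zip(points, labels) if l == 0]
    c1 = [p for p, l in zip(points, labels) if l == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda x: 0 if abs(x - m0) <= abs(x - m1) else 1

# Clean training set: class 0 clusters near 0, class 1 near 10.
points = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
clean_labels = [0, 0, 0, 1, 1, 1]

# Poisoned copy: an attacker flips two class-1 labels to 0,
# dragging the class-0 centroid toward the class-1 region.
poisoned_labels = [0, 0, 0, 0, 0, 1]

test_points = [0.5, 6.0, 8.5, 9.5]
test_labels = [0, 1, 1, 1]

clean_model = centroid_classifier(points, clean_labels)
poisoned_model = centroid_classifier(points, poisoned_labels)

clean_acc = sum(clean_model(x) == y
                for x, y in zip(test_points, test_labels)) / len(test_points)
poisoned_acc = sum(poisoned_model(x) == y
                   for x, y in zip(test_points, test_labels)) / len(test_points)
# The poisoned model now misclassifies points near the shifted boundary.
```

Here the clean model scores 1.0 on the held-out points while the poisoned model drops to 0.75, illustrating how even a small number of corrupted records can silently degrade accuracy.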
Context ∞ Data poisoning is a growing concern in discussions of the security and trustworthiness of AI models, especially as they integrate with blockchains for decentralized data sharing. A central challenge is developing robust defenses and verification mechanisms for data integrity in distributed environments. Current research focuses on cryptographic proofs and consensus mechanisms that detect and mitigate poisoned data before it reaches AI systems.
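One simple building block for the data-integrity verification the context describes is a cryptographic commitment to a dataset. The sketch below (hypothetical helper names; a production system might instead use a Merkle tree so individual records can be verified without the full set) hashes a canonical serialization so any tampering, such as a flipped label, is detectable before training:

```python
import hashlib
import json

def dataset_digest(records):
    # Hash a canonical JSON serialization of the dataset.
    # sort_keys ensures the same records always produce the same digest.
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# A data provider publishes this digest (e.g., on-chain) at commit time.
records = [{"x": 1.0, "label": 0}, {"x": 9.0, "label": 1}]
committed = dataset_digest(records)

# A consumer re-derives the digest before training; a match means
# the data is byte-for-byte what was committed.
is_intact = dataset_digest(records) == committed

# A poisoned copy (one flipped label) no longer matches the commitment.
tampered = [{"x": 1.0, "label": 1}, {"x": 9.0, "label": 1}]
tamper_detected = dataset_digest(tampered) != committed
```

This only proves the data matches what was committed; it cannot tell whether the committed data was honest in the first place, which is why the consensus and proof mechanisms mentioned above remain an active research area.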