Adversarial Robustness

Definition

Adversarial robustness refers to a system’s capacity to maintain correct behavior when subjected to deliberately crafted malicious input. The property is especially important for machine learning models and decentralized protocols, which must remain reliable when an attacker supplies malformed data or adversarial examples, meaning inputs perturbed specifically to provoke misbehavior or incorrect outputs.
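
In the machine learning setting, adversarial examples are often illustrated with gradient-based perturbations. The sketch below shows a minimal Fast Gradient Sign Method (FGSM) attack in PyTorch; the toy model, tensor shapes, and epsilon value are illustrative assumptions rather than part of the definition above.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the
# model's loss, within an epsilon budget. Model and data are placeholders.
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for any trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Return a copy of x perturbed to increase the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    loss.backward()
    # Step each input element by epsilon in the sign of its gradient,
    # then clamp back to the valid input range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage: compare predictions on a random "image" before and after the attack.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

A robust system would keep its prediction (and confidence) stable under such bounded perturbations; a brittle one flips its output even though the change to the input is small.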