Optimal Prover Time Succinct Zero-Knowledge Proofs Redefine Scalability
The Libra proof system achieves optimal linear prover time, solving the primary bottleneck of ZKPs to unlock practical, large-scale verifiable computation.
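Libra's linear-time prover is built on the sum-check protocol for multilinear polynomials. As a toy illustration only (honest prover, prover and verifier interleaved in one function, a Mersenne-prime field chosen here for convenience), the core round structure and the linear-time folding trick can be sketched as:

```python
import random

P = 2**61 - 1  # toy prime field for illustration

def sumcheck(table, n):
    """Sum-check over a multilinear polynomial given by its evaluation
    table on {0,1}^n. Prover work is linear: the table halves each round.
    Returns True if the (honest-prover) interaction accepts."""
    claim = sum(table) % P
    evals = list(table)
    for _ in range(n):
        half = len(evals) // 2
        g0 = sum(evals[:half]) % P       # g(0): first free variable set to 0
        g1 = sum(evals[half:]) % P       # g(1): first free variable set to 1
        if (g0 + g1) % P != claim:       # verifier's round consistency check
            return False
        r = random.randrange(P)          # verifier's random challenge
        # fold: fix the first variable to r via multilinear interpolation
        evals = [(e0 + r * (e1 - e0)) % P
                 for e0, e1 in zip(evals[:half], evals[half:])]
        claim = (g0 + r * (g1 - g0)) % P  # g(r) becomes the next claim
    # final check: the remaining entry is f at the random point
    return claim == evals[0]
```

The total prover work is proportional to the table size (n + n/2 + n/4 + ... entries touched), which is the linear-time behavior the headline refers to.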
Quantum Rewinding Secures Succinct Arguments against Quantum Adversaries
A novel quantum rewinding strategy proves IOP-based succinct arguments secure in the post-quantum era, ensuring long-term cryptographic integrity.
Linear Prover Time Unlocks Scalable Zero-Knowledge Proof Generation
Orion achieves optimal linear prover time and polylogarithmic proof size, resolving the ZKP scalability bottleneck for complex on-chain computation.
Equifficient Polynomial Commitments Unlock Optimal SNARK Size and Speed
A new equifficient polynomial commitment primitive resolves the SNARK size-time trade-off, enabling the smallest proofs and fastest verifiable computation.
Black-Box Commit-and-Prove SNARKs Unlock Verifiable Computation Scaling
Artemis, a new black-box SNARK construction, modularly solves the commitment verification bottleneck, enabling practical, large-scale zero-knowledge machine learning.
Lattice-Based Arguments Achieve Succinct Post-Quantum Verification Using Homomorphic Commitments
This work delivers the first lattice-based argument with polylogarithmic verification time, resolving the trade-off between post-quantum security and SNARK succinctness.
Zero-Knowledge Proof of Training Secures Private Decentralized Machine Learning Consensus
Zero-Knowledge Proof of Training (ZKPoT) leverages zk-SNARKs to validate collaborative model performance privately, enabling scalable, secure decentralized AI.
Efficient Lattice Commitments Secure Post-Quantum Verifiable Computation
Greyhound introduces the first concretely efficient lattice-based polynomial commitment scheme, providing quantum-resistant security for verifiable computation.
Folding Schemes Enable Efficient Recursive Zero-Knowledge Computation
Folding schemes fundamentally reduce recursive proof overhead, enabling ultra-efficient incrementally verifiable computation for long-running processes.
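The key idea behind folding (as in Nova-style schemes) is combining two instance-witness pairs into one via a random linear combination, so that a single check attests to both. A minimal toy sketch, using an additively homomorphic inner-product "commitment" that is illustrative only (not hiding or binding like a real commitment):

```python
import random

P = 2**61 - 1  # toy prime modulus
G = [random.randrange(1, P) for _ in range(4)]  # toy public "generators"

def commit(v):
    # additively homomorphic toy commitment: <G, v> mod P
    return sum(g * x for g, x in zip(G, v)) % P

def fold(inst1, inst2, r):
    """Fold two (commitment, witness) instances with verifier challenge r.
    Homomorphism guarantees the folded witness opens the folded commitment."""
    c1, w1 = inst1
    c2, w2 = inst2
    c = (c1 + r * c2) % P
    w = [(a + r * b) % P for a, b in zip(w1, w2)]
    return c, w
```

Checking `commit(w) == c` on the folded pair replaces two separate checks with one, which is what lets recursive proof composition amortize its overhead across many steps.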
Incremental Vector Commitments Enable Practical Trustless AI Model Verification
We introduce Incremental Vector Commitments, a new primitive that decouples LLM size from ZK-proving cost, unlocking verifiable AI inference.
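For intuition, a standard Merkle-tree vector commitment already supports incremental leaf updates at O(log n) cost; the sketch below is an illustrative stand-in for the general idea, not the paper's Incremental Vector Commitment construction:

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class MerkleVC:
    """Merkle-tree vector commitment over a power-of-two number of leaves.
    Updating one leaf re-hashes only the root-to-leaf path: O(log n)."""
    def __init__(self, leaves):
        assert len(leaves) and len(leaves) & (len(leaves) - 1) == 0
        self.n = len(leaves)
        # heap layout: nodes[1] is the root, nodes[n:] are hashed leaves
        self.nodes = [b''] * self.n + [H(x) for x in leaves]
        for i in range(self.n - 1, 0, -1):
            self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])

    def root(self):
        return self.nodes[1]

    def update(self, idx, value):
        i = self.n + idx
        self.nodes[i] = H(value)
        i //= 2
        while i:  # re-hash only the path to the root
            self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])
            i //= 2
```

The point of decoupling model size from proving cost is visible even here: after a local weight update, only a logarithmic number of hashes change, rather than recommitting to the full parameter vector.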
zk-SNARKs: Succinct Proofs for Verifiable, Private Computation
zk-SNARKs let a prover attest to computational integrity without revealing the underlying data, powering secure and scalable decentralized systems.
