Recursive Sumchecks Enable Linear-Time Proving for Verifiable Computation
The Goldwasser-Kalai-Rothblum protocol's linear-time prover complexity radically lowers proof generation costs, unlocking practical, high-throughput ZK-rollup scaling.
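The sumcheck protocol at the heart of GKR-style provers can be sketched in a few lines. The polynomial `f`, the modulus `P`, and the function names below are illustrative choices for a minimal multilinear instance, not the protocol as any particular paper implements it.

```python
import random
from itertools import product

P = 2**61 - 1  # illustrative prime field modulus

def f(x, y, z):
    # an example multilinear polynomial over F_P (degree <= 1 per variable)
    return (2 * x * y + 3 * z + x * z) % P

def sumcheck(poly, n_vars=3):
    """Interactive sumcheck: the prover defends the claim
    sum_{b in {0,1}^n} poly(b), one round per variable, with a single
    oracle evaluation of poly at the end."""
    claim = sum(poly(*pt) for pt in product((0, 1), repeat=n_vars)) % P
    r = []  # verifier challenges fixed so far
    for i in range(n_vars):
        rest = n_vars - i - 1
        # Prover: round polynomial g_i(X); multilinear => degree 1,
        # so sending (g_i(0), g_i(1)) determines it.
        g0 = sum(poly(*r, 0, *pt) for pt in product((0, 1), repeat=rest)) % P
        g1 = sum(poly(*r, 1, *pt) for pt in product((0, 1), repeat=rest)) % P
        # Verifier: round consistency check, then a fresh random challenge.
        assert (g0 + g1) % P == claim, "round check failed"
        ri = random.randrange(P)
        r.append(ri)
        claim = (g0 + ri * (g1 - g0)) % P  # g_i(ri) by linear interpolation
    # Final check: one evaluation of poly at the random point.
    assert poly(*r) == claim, "final oracle check failed"
    return True
```

Each round reduces a sum over a hypercube of dimension n-i to one of dimension n-i-1, which is what keeps the verifier's work small; the linear-prover results in the GKR line of work concern how cheaply the prover can compute the round polynomials.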
Equifficient Polynomial Commitments Enable the Fastest, Smallest Zero-Knowledge SNARKs
New Equifficient Polynomial Commitments (EPCs) enforce consistency of the polynomial basis across commitments, yielding SNARKs with the smallest proof size and fastest prover time reported to date.
Sublinear Vector Commitments Enable Constant-Time Verification for Scalable Systems
A new vector commitment scheme achieves constant verification time with logarithmic proof size, enabling efficient stateless clients and scalable data availability.
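For context, the baseline such schemes improve on is the Merkle-style vector commitment, where both the membership proof and its verification are logarithmic in the vector length. The sketch below is a generic illustration of that baseline, not the constant-verification scheme the item describes.

```python
import hashlib

def H(*parts):
    # SHA-256 over the concatenation of the inputs
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def commit(leaves):
    # Build the tree bottom-up; assumes len(leaves) is a power of two.
    level = [H(v) for v in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root commitment

def open_at(tree, i):
    # Membership proof for position i: sibling hashes along the path, O(log n).
    proof = []
    for level in tree[:-1]:
        proof.append(level[i ^ 1])
        i //= 2
    return proof

def verify(root, i, value, proof):
    # Recompute the root from the leaf and its siblings; O(log n) hashes.
    node = H(value)
    for sib in proof:
        node = H(node, sib) if i % 2 == 0 else H(sib, node)
        i //= 2
    return node == root
```

Cutting the verifier's work from logarithmic to constant, while keeping proofs logarithmic, is precisely the gap the new scheme claims to close.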
Optimal Prover Time Unlocks Scalable Zero-Knowledge Verifiable Computation
A new zero-knowledge argument system achieves optimal linear prover time, removing the main computational bottleneck in verifiable execution of large programs.
Modular zkVM Architecture Achieves Thousandfold Verifiable Computation Throughput
Integrating a STARK prover with logarithmic derivative memory checking radically increases zkVM efficiency, unlocking verifiable computation for global financial systems.
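The logarithmic-derivative technique reduces a lookup or memory-consistency check to an equality of fractional sums at a random field element alpha. The field modulus and helper names below are illustrative; in a real zkVM this identity is enforced over committed trace columns inside the STARK, not with plain Python arithmetic.

```python
import random
from collections import Counter

P = 2**61 - 1  # illustrative prime field modulus

def inv(x):
    # modular inverse via Fermat's little theorem (requires x != 0 mod P)
    return pow(x, P - 2, P)

def logup_check(table, lookups, alpha):
    """Probabilistic check that every lookup value appears in `table`,
    via the logarithmic-derivative identity
        sum_i 1/(alpha - f_i) == sum_j m_j / (alpha - t_j),
    where m_j counts occurrences of table entry t_j among the lookups.
    Assumes the entries of `table` are distinct."""
    lhs = sum(inv((alpha - v) % P) for v in lookups) % P
    mult = Counter(lookups)
    rhs = sum(mult[t] * inv((alpha - t) % P) for t in set(table)) % P
    return lhs == rhs
```

A lookup value outside the table contributes a pole on the left that nothing on the right can match, so the check fails at a random alpha except with negligible probability; the efficiency win is that each table entry costs one term regardless of how many times it is read.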
Universal Vector Commitments Enable Efficient Proofs of Non-Membership and Data Integrity
Introducing Universal Vector Commitments, a new primitive that securely proves element non-membership, strengthening data verification for stateless clients and ZK-rollups.
Lattice-Based Polynomial Commitments Achieve Post-Quantum Succinctness and Sublinear Verification
Greyhound is the first concretely efficient lattice-based polynomial commitment scheme, enabling post-quantum secure zero-knowledge proofs with sublinear verifier time.
Zero-Knowledge Proof of Training Secures Private Federated Learning Consensus
ZKPoT consensus validates machine learning contributions privately using zk-SNARKs, balancing efficiency, security, and data privacy for decentralized AI.
