Transparent Constant-Size Zero-Knowledge Proofs Eliminate Trusted Setup
This breakthrough cryptographic primitive, based on Groups of Unknown Order, yields a truly succinct zk-SNARK without a trusted setup, unlocking scalable, trustless computation.
Transparent Polynomial Commitments Achieve Practical Constant-Size Proofs
New aggregation techniques slash transparent polynomial commitment proof size by 85%, enabling practical, trustless, constant-sized ZK-SNARKs.
Logical Unprovability Enables Perfectly Sound Transparent Zero-Knowledge Proofs
Leveraging Gödelian principles, this new cryptographic model achieves perfectly sound, non-interactive, transparent proofs, resolving the trusted setup dilemma.
Efficient Post-Quantum Polynomial Commitments Fortify Zero-Knowledge Scalability
Greyhound introduces the first concretely efficient lattice-based polynomial commitment scheme, unlocking post-quantum security for zk-SNARKs and blockchain scaling primitives.
Sublinear Transparent Commitment Scheme Unlocks Efficient Data Availability Sampling
A new transparent polynomial commitment scheme with sublinear proof size radically optimizes data availability for stateless clients, resolving a core rollup bottleneck.
Zero-Knowledge Commitment Secures Private Mechanism Design and Verifiable Incentives
Cryptographic proofs enable a party to commit to a hidden mechanism while verifiably guaranteeing its incentive properties, eliminating trusted mediators.
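The commit-then-verify pattern at the heart of this result can be illustrated with a plain hash commitment: a party fixes its mechanism up front and later proves the revealed rules match. This is a minimal sketch only; the actual construction adds zero-knowledge proofs of the incentive properties, and all names below are illustrative.

```python
import hashlib
import secrets

def commit(mechanism_params: bytes) -> tuple[bytes, bytes]:
    """Commit to hidden mechanism parameters; returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + mechanism_params).digest()
    return digest, nonce

def verify(commitment: bytes, mechanism_params: bytes, nonce: bytes) -> bool:
    """Check that a revealed mechanism matches the earlier commitment."""
    return hashlib.sha256(nonce + mechanism_params).digest() == commitment

# The party binds itself to an auction rule before bids arrive, reveals it after.
params = b"second-price,reserve=10"
c, r = commit(params)
assert verify(c, params, r)            # honest reveal passes
assert not verify(c, b"first-price,reserve=10", r)  # swapped rules are caught
```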
Recursive Sumchecks Enable Linear-Time Verifiable Computation Proving
The Goldwasser-Kalai-Rothblum protocol's linear-time prover complexity radically lowers proof generation costs, unlocking practical, high-throughput ZK-rollup scaling.
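The sumcheck protocol underlying GKR can be sketched in a few lines: each round, the prover sends a univariate restriction of the claimed sum, the verifier checks consistency at 0 and 1, then fixes that variable to a random challenge; after all rounds only one point evaluation remains. The field modulus and multilinear polynomial below are illustrative, and the prover is brute force.

```python
import itertools
import random

P = 2**61 - 1  # illustrative prime field modulus
N = 3          # number of variables

def g(x):
    """Example multilinear polynomial in three variables."""
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x2 * x3 + 3 * x1 + x3) % P

def partial_sum(fixed, var_value):
    """Sum of g with prefix `fixed`, next variable set, remaining vars over {0,1}."""
    rest = N - len(fixed) - 1
    total = 0
    for tail in itertools.product([0, 1], repeat=rest):
        total = (total + g(tuple(fixed) + (var_value,) + tail)) % P
    return total

# Prover's claim: the sum of g over the whole Boolean cube.
claim = sum(g(x) for x in itertools.product([0, 1], repeat=N)) % P

fixed = []
for _ in range(N):
    s0, s1 = partial_sum(fixed, 0), partial_sum(fixed, 1)  # round message
    assert (s0 + s1) % P == claim                          # verifier consistency check
    r = random.randrange(P)                                # verifier challenge
    claim = ((1 - r) * s0 + r * s1) % P                    # degree-1 message evaluated at r
    fixed.append(r)

# Final check: one evaluation of g at a random point replaces the whole sum.
assert g(tuple(fixed)) == claim
```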
Equifficient Polynomial Commitments Enable Fastest, Smallest Zero-Knowledge SNARKs
New Equifficient Polynomial Commitments (EPCs) enforce consistency of the polynomial basis across commitments, yielding SNARKs with the smallest proof sizes and fastest prover times to date.
Sublinear Vector Commitments Enable Constant-Time Verification for Scalable Systems
A new vector commitment scheme achieves constant verification time with logarithmic proof size, fundamentally enabling efficient stateless clients and scalable data availability.
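For contrast with the constant-time scheme in the headline, the standard baseline is a Merkle-tree vector commitment, whose openings cost logarithmic proof size *and* logarithmic verification. A minimal sketch (power-of-two vector length assumed):

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def commit(vec):
    """Merkle-tree vector commitment; returns (root, all tree layers)."""
    layer = [H(v) for v in vec]
    layers = [layer]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layers[-1][0], layers

def open_at(layers, i):
    """Logarithmic-size opening proof: sibling hashes along the path to the root."""
    proof = []
    for layer in layers[:-1]:
        proof.append(layer[i ^ 1])  # sibling at this level
        i //= 2
    return proof

def verify(root, i, value, proof):
    """Recompute the root from the leaf and siblings; logarithmic work."""
    node = H(value)
    for sib in proof:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node == root

vec = [b"a", b"b", b"c", b"d"]
root, layers = commit(vec)
pf = open_at(layers, 2)
assert verify(root, 2, b"c", pf)
assert not verify(root, 2, b"x", pf)
```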
Optimal Prover Time Unlocks Scalable Zero-Knowledge Verifiable Computation
A new zero-knowledge argument system achieves optimal linear prover time, fundamentally eliminating the computational bottleneck for verifiable execution of large programs.
Modular zkVM Architecture Achieves Thousandfold Verifiable Computation Throughput
Integrating a STARK prover with logarithmic derivative memory checking radically increases zkVM efficiency, unlocking verifiable computation for global financial systems.
Universal Vector Commitments Enable Efficient Proofs of Non-Membership and Data Integrity
Introducing Universal Vector Commitments, a new primitive that securely proves element non-membership, fundamentally enhancing stateless client and ZK-rollup data verification.
Incremental Proofs Maintain Constant-Size Sequential Work for Continuous Verification
This new cryptographic primitive enables constant-size proofs for arbitrarily long sequential computations, fundamentally solving the accumulated overhead problem for VDFs.
Zero-Knowledge Proof of Training Secures Decentralized Federated Learning
The Zero-Knowledge Proof of Training (ZKPoT) primitive uses zk-SNARKs to validate model performance without revealing private data, enabling trustless, scalable decentralized AI.
Generic Folding Scheme Enables Efficient Non-Uniform Verifiable Computation
Protostar introduces a generic folding scheme for special-sound protocols, drastically reducing recursive overhead for complex, non-uniform verifiable computation.
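Folding can be illustrated with the Nova-style relaxed R1CS fold that Protostar generalizes: two satisfying instances are combined under a random challenge, and a cross term absorbs the quadratic mixing so the folded instance still satisfies the relation. A toy sketch over a tiny field, with commitments omitted:

```python
import random

P = 101  # tiny illustrative prime field

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

def hadamard(a, b):
    return [(x * y) % P for x, y in zip(a, b)]

def check(A, B, C, z, u, E):
    """Relaxed R1CS relation: (A z) o (B z) == u * (C z) + E, elementwise."""
    lhs = hadamard(mat_vec(A, z), mat_vec(B, z))
    rhs = [(u * c + e) % P for c, e in zip(mat_vec(C, z), E)]
    return lhs == rhs

# One constraint over three variables: z0 * z1 = z2.
A, B, C = [[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]

# Two satisfying witnesses, each a plain (u=1, E=0) instance.
z1, z2 = [3, 4, 12], [5, 6, 30]
u1 = u2 = 1
E1 = E2 = [0]

# Cross term T captures the mixed quadratic terms created by folding.
Az1, Bz1, Cz1 = mat_vec(A, z1), mat_vec(B, z1), mat_vec(C, z1)
Az2, Bz2, Cz2 = mat_vec(A, z2), mat_vec(B, z2), mat_vec(C, z2)
T = [(a1 * b2 + a2 * b1 - u1 * c2 - u2 * c1) % P
     for a1, b1, c1, a2, b2, c2 in zip(Az1, Bz1, Cz1, Az2, Bz2, Cz2)]

r = random.randrange(1, P)  # verifier's folding challenge
z = [(a + r * b) % P for a, b in zip(z1, z2)]
u = (u1 + r * u2) % P
E = [(e1 + r * t + r * r * e2) % P for e1, t, e2 in zip(E1, T, E2)]

assert check(A, B, C, z, u, E)  # one folded instance now stands in for both
```

The recursion win is that each fold replaces two satisfiability claims with one, so a long chain of computations accumulates into a single instance checked once at the end.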
Fast Zero-Knowledge Proofs for Verifiable Machine Learning via Circuit Optimization
The Constraint-Reduced Polynomial Circuit (CRPC) dramatically lowers ZKP overhead for matrix operations, making private, verifiable AI practical.
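The theme of cheapening matrix operations echoes a classic trick: Freivalds' randomized check verifies a matrix product using only matrix-vector multiplications, the same random-linear-combination idea that constraint-reduced circuits exploit. This is an illustrative sketch, not the paper's CRPC construction:

```python
import random

def mat_vec(M, v, p):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def freivalds(A, B, C, p=2**61 - 1, rounds=10):
    """Probabilistically check C == A @ B over F_p with O(n^2) work per round."""
    n = len(C)
    for _ in range(rounds):
        r = [random.randrange(p) for _ in range(n)]
        # Compare A(Br) with Cr: three matrix-vector products instead of
        # one O(n^3) matrix multiplication.
        if mat_vec(A, mat_vec(B, r, p), p) != mat_vec(C, r, p):
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad = [[19, 22], [43, 51]]
assert freivalds(A, B, good)
assert not freivalds(A, B, bad)  # a wrong entry is caught w.h.p. per round
```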
Horizontally Scalable zkSNARKs via Proof Aggregation Framework
This framework achieves horizontal zkSNARK scalability by distributing large computations for parallel proving, then aggregating results into a single succinct proof.
Optimizing ZK-SNARKs by Minimizing Expensive Cryptographic Group Elements
Polymath redesigns zk-SNARKs by shifting proof composition from 𝔾₂ to 𝔾₁ elements, significantly reducing practical proof size and on-chain cost.
