Optimizing ZK-SNARKs by Minimizing Expensive Cryptographic Group Elements
Polymath redesigns zk-SNARKs by shifting proof composition from $\mathbb{G}_2$ to the cheaper $\mathbb{G}_1$ elements, significantly reducing practical proof size and on-chain verification cost.
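To see why the group choice matters: on a pairing-friendly curve such as BLS12-381, a compressed $\mathbb{G}_2$ point is twice the size of a $\mathbb{G}_1$ point. A rough byte-count comparison, using Groth16's well-known 2 $\mathbb{G}_1$ + 1 $\mathbb{G}_2$ proof layout against a hypothetical all-$\mathbb{G}_1$ three-element proof (illustrative only, not Polymath's exact proof structure):

```python
# Compressed point sizes on BLS12-381, in bytes: G2 points are twice G1.
G1_BYTES = 48
G2_BYTES = 96

def proof_size(n_g1: int, n_g2: int) -> int:
    """Total byte size of a proof made of n_g1 G1 and n_g2 G2 elements."""
    return n_g1 * G1_BYTES + n_g2 * G2_BYTES

groth16 = proof_size(2, 1)   # Groth16: 2 G1 + 1 G2
all_g1  = proof_size(3, 0)   # hypothetical all-G1 proof with 3 elements

print(groth16, all_g1)  # 192 144
```

Even at equal element counts, trading a $\mathbb{G}_2$ element for a $\mathbb{G}_1$ element saves 48 bytes on this curve, and $\mathbb{G}_1$ arithmetic is also cheaper for the prover.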
zk-SNARKs Enable Trustless Universal Cross-Chain State Verification
The Zendoo protocol uses recursive zk-SNARKs to generate succinct, constant-size proofs of sidechain state, enabling cross-chain verification without trusted intermediaries.
Zero-Knowledge Proof of Training Secures Private Decentralized AI Consensus
ZKPoT, a novel zk-SNARK-based consensus, cryptographically validates decentralized AI model contributions without exposing private training data, improving both privacy and scalability.
Optimal Prover Complexity Unlocks Linear-Time Zero-Knowledge Proof Generation
This breakthrough achieves optimal $O(N)$ prover time for SNARKs, removing the long-standing $O(N \log N)$ quasi-linear bottleneck and enabling practical, scalable verifiable computation.
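The gap between quasi-linear and linear prover time grows with circuit size: for a circuit of $N$ gates, an $O(N \log N)$ prover does roughly $\log_2 N$ times the work of an $O(N)$ prover. A back-of-the-envelope comparison, constants omitted and purely illustrative:

```python
import math

def quasi_over_linear(n: int) -> float:
    """Ratio of O(N log N) to O(N) operation counts (constants omitted)."""
    return (n * math.log2(n)) / n

for exp in (20, 25, 30):  # circuits of roughly 1M, 33M, and 1B gates
    print(f"N = 2^{exp}: quasi-linear prover does {quasi_over_linear(2**exp):.0f}x the work")
```

At a billion-gate circuit, the asymptotic factor alone is about 30x, which is why closing this gap matters for large-scale verifiable computation.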
Scalable Distributed Randomness via Insertion-Secure Accumulators
Research demonstrates a scalable distributed randomness beacon by enforcing verifiable inclusion of all entropy contributions using insertion-secure accumulators.
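The core idea can be illustrated with a Merkle tree standing in for the accumulator (one possible instantiation, not necessarily the paper's construction): every entropy share is inserted as a leaf, each contributor receives an inclusion proof, and the beacon output is derived from the root, so no contribution can be silently dropped. All names below are illustrative:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root_and_proofs(leaves):
    """Build a Merkle tree; return (root, one inclusion proof per leaf)."""
    assert leaves and (len(leaves) & (len(leaves) - 1)) == 0  # power of two, for brevity
    level = [h(x) for x in leaves]
    proofs = [[] for _ in leaves]
    pos = list(range(len(leaves)))
    while len(level) > 1:
        for j, p in enumerate(pos):
            proofs[j].append((p & 1, level[p ^ 1]))  # (am-I-right-child, sibling hash)
            pos[j] = p // 2
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], proofs

def verify_inclusion(leaf, proof, root):
    node = h(leaf)
    for is_right, sibling in proof:
        node = h(sibling, node) if is_right else h(node, sibling)
    return node == root

# Four entropy contributions; the beacon output is derived from the root.
shares = [b"entropy-0", b"entropy-1", b"entropy-2", b"entropy-3"]
root, proofs = merkle_root_and_proofs(shares)
beacon = h(b"beacon", root)
print(all(verify_inclusion(s, p, root) for s, p in zip(shares, proofs)))  # True
```

Because the beacon value commits to the root, any participant whose inclusion proof fails to verify has public evidence that their contribution was dropped, which is the accountability property an insertion-secure accumulator formalizes.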
Sublinear-Space Provers Democratize Verifiable Computation and Privacy at Scale
A novel block-processing algorithm achieves $O(\sqrt{N})$ prover memory for ZKPs, moving verifiable computation from server-bound to device-feasible.
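Illustratively, a square-root-space strategy splits an $N$-step witness into roughly $\sqrt{N}$ blocks and keeps only one block plus a running digest in memory at a time, so peak memory is $O(\sqrt{N})$ rather than $O(N)$. A toy sketch, with a hash digest standing in for the prover's per-block commitment (all names are illustrative):

```python
import hashlib
import math

def stream_witness(n_steps):
    """Generator standing in for a witness too large to hold in memory."""
    for i in range(n_steps):
        yield i.to_bytes(8, "big")

def block_commit(n_steps):
    """Process the witness in ~sqrt(N)-sized blocks; only one block is resident."""
    block_size = max(1, math.isqrt(n_steps))
    running = hashlib.sha256()
    block, peak = [], 0
    for step in stream_witness(n_steps):
        block.append(step)
        if len(block) == block_size:
            running.update(hashlib.sha256(b"".join(block)).digest())
            peak = max(peak, len(block))
            block = []
    if block:  # commit the final partial block
        running.update(hashlib.sha256(b"".join(block)).digest())
        peak = max(peak, len(block))
    return running.hexdigest(), peak

commitment, peak_resident = block_commit(1_000_000)
print(peak_resident)  # 1000 steps resident at once, not 1,000,000
```

A real sublinear-space prover must also re-derive intermediate state when revisiting blocks, trading extra passes over the witness for the memory savings; the sketch shows only the memory side of that trade-off.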
Efficient Commit-and-Prove SNARKs for Practical Zero-Knowledge Machine Learning
Artemis introduces novel Commit-and-Prove SNARKs, drastically reducing commitment verification overhead in zkML to enable scalable, trustworthy AI applications.
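A commit-and-prove system binds the SNARK to a prior commitment of the data, so the proof demonstrably speaks about exactly the committed values. A minimal sketch of the commitment half, using a Pedersen commitment over a toy multiplicative group (the SNARK linking step is elided, and all parameters are illustrative, not Artemis's construction):

```python
import secrets

# Toy Pedersen commitment over Z_p* (illustrative parameters only; real
# systems use elliptic-curve groups and nothing-up-my-sleeve generators).
P = 2**127 - 1   # a Mersenne prime
G = 3
H = 5            # in practice H must have no known discrete log relative to G

def commit(message: int, randomness: int) -> int:
    """Pedersen commitment: hiding via randomness, binding via discrete log."""
    return pow(G, message, P) * pow(H, randomness, P) % P

m1, r1 = 42, secrets.randbelow(P - 1)
m2, r2 = 58, secrets.randbelow(P - 1)

# Additive homomorphism: commitments to m1 and m2 multiply into a
# commitment to m1 + m2 -- the kind of algebraic structure commit-and-prove
# SNARKs exploit to link committed data into a proof without re-committing.
lhs = commit(m1, r1) * commit(m2, r2) % P
rhs = commit(m1 + m2, r1 + r2)
print(lhs == rhs)  # True
```

In zkML this matters because the model weights are committed once, and every subsequent inference proof is checked against that same commitment rather than re-encoding the weights inside each circuit.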
