Lattice-Based Zero-Knowledge Signatures Eliminate Cryptographic Trapdoors
A new post-quantum signature framework converts non-trapdoor zero-knowledge proofs into digital signatures, strengthening long-term security guarantees by removing trapdoors as a single point of failure.
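To make the proof-to-signature conversion concrete, below is a minimal, runnable sketch of the generic Fiat-Shamir pattern that such conversions typically rely on: the verifier's random challenge is replaced by a hash of the prover's commitment and the message. For brevity it instantiates a toy Schnorr identification protocol over an insecure discrete-log group rather than the lattice-based, non-trapdoor proof the framework uses; only the conversion pattern is illustrated, and reading the framework as Fiat-Shamir style is an assumption, not a detail stated above.

```python
import hashlib
import secrets

# Toy, insecure parameters: q is prime, q divides p - 1, and g generates the
# order-q subgroup of Z_p^*. A real instantiation would prove a lattice
# relation instead of this discrete-log relation.
p, q = 48731, 443
g = next(x for x in (pow(h, (p - 1) // q, p) for h in range(2, p)) if x != 1)

def challenge(commitment: int, message: bytes) -> int:
    # Fiat-Shamir: the hash of (commitment, message) stands in for the verifier.
    digest = hashlib.sha3_256(str(commitment).encode() + message).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret key
    return x, pow(g, x, p)                    # (sk, pk)

def sign(x: int, message: bytes):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                          # prover's first move (commitment)
    c = challenge(t, message)                 # derived, not chosen by a verifier
    s = (r + c * x) % q                       # prover's response
    return t, s

def verify(y: int, message: bytes, signature) -> bool:
    t, s = signature
    c = challenge(t, message)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

sk, pk = keygen()
assert verify(pk, b"msg", sign(sk, b"msg"))
```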
Optimizing ZK-SNARKs by Minimizing Expensive Cryptographic Group Elements
Polymath redesigns zk-SNARKs by shifting proof composition from $\mathbb{G}_2$ to $\mathbb{G}_1$ elements, significantly reducing practical proof size and on-chain cost.
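A back-of-envelope calculation shows why trading $\mathbb{G}_2$ elements for $\mathbb{G}_1$ elements shrinks pairing-based proofs: on BLS12-381 a compressed $\mathbb{G}_2$ point is twice the size of a $\mathbb{G}_1$ point. The Groth16 layout below is only a familiar baseline, and the "all-$\mathbb{G}_1$" layout is illustrative, not Polymath's exact proof composition.

```python
# Element sizes are compressed-point sizes on the BLS12-381 curve.
G1_BYTES = 48    # compressed G1 point
G2_BYTES = 96    # compressed G2 point (twice as large)
FR_BYTES = 32    # scalar field element

groth16_like = 2 * G1_BYTES + 1 * G2_BYTES   # A in G1, B in G2, C in G1 -> 192 bytes
all_g1_like = 3 * G1_BYTES + 1 * FR_BYTES    # same element count, no G2  -> 176 bytes

print(f"2 G1 + 1 G2 proof:     {groth16_like} bytes")
print(f"3 G1 + 1 scalar proof: {all_g1_like} bytes")
```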
Zero-Knowledge Proof of Training Secures Decentralized Federated Learning Consensus
A Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism uses zk-SNARKs to cryptographically verify each participant's model contribution without revealing private data or model parameters, avoiding the centralization of stake-based selection and resolving the privacy-verifiability trade-off in decentralized federated learning.
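The sketch below outlines how such a ZKPoT round could fit together, under the assumption that the statement proven is "the committed model reaches the claimed accuracy on a committed evaluation set". The trainer, commitment scheme, and SNARK backend are passed in as callables because they are hypothetical placeholders, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contribution:
    model_commitment: bytes   # hiding commitment to the updated model
    claimed_accuracy: float   # public output of the proven computation
    proof: bytes              # zk-SNARK attesting to the accuracy claim

def propose(train: Callable, commit: Callable, prove: Callable,
            local_data, eval_commitment: bytes) -> Contribution:
    model, accuracy = train(local_data)
    proof = prove(statement=(eval_commitment, accuracy),
                  witness=(model, local_data))   # witness never leaves the node
    return Contribution(commit(model), accuracy, proof)

def validate(verify: Callable, c: Contribution,
             eval_commitment: bytes, threshold: float) -> bool:
    # Validators learn only that the committed model clears the threshold,
    # not the model parameters or the private training data.
    return c.claimed_accuracy >= threshold and verify(
        statement=(eval_commitment, c.claimed_accuracy), proof=c.proof)
```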
OR-Aggregation Secures Efficient Zero-Knowledge Set Membership Proofs
A novel OR-aggregation technique drastically reduces proof size and computation for set membership, enabling private, scalable data management in IoT.
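For intuition, the runnable sketch below shows the textbook sigma-protocol OR-composition, made non-interactive with Fiat-Shamir, on which set-membership proofs of this kind build: the prover shows its value matches one of several public set elements without revealing which. It uses an insecure toy group and does not reproduce the paper's aggregation, which is what actually shrinks the proofs.

```python
import hashlib
import secrets

# Toy, insecure parameters: g generates the order-q subgroup of Z_p^*.
p, q = 48731, 443
g = next(x for x in (pow(h, (p - 1) // q, p) for h in range(2, p)) if x != 1)

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big") % q

def prove_membership(pubs, j, x):
    """Prove knowledge of x with pubs[j] == g**x (mod p), hiding the index j."""
    n = len(pubs)
    c, s, t = [0] * n, [0] * n, [0] * n
    for i in range(n):
        if i == j:
            continue
        c[i], s[i] = secrets.randbelow(q), secrets.randbelow(q)   # simulated clauses
        t[i] = (pow(g, s[i], p) * pow(pubs[i], q - c[i], p)) % p
    r = secrets.randbelow(q)
    t[j] = pow(g, r, p)                                           # honest clause
    c_total = H(*pubs, *t)
    c[j] = (c_total - sum(c[i] for i in range(n) if i != j)) % q
    s[j] = (r + c[j] * x) % q
    return c, s

def verify_membership(pubs, proof) -> bool:
    c, s = proof
    t = [(pow(g, s[i], p) * pow(pubs[i], q - c[i], p)) % p for i in range(len(pubs))]
    return sum(c) % q == H(*pubs, *t)

# The prover holds the secret exponent for the second of three set elements.
secret = secrets.randbelow(q - 1) + 1
pubs = [pow(g, secrets.randbelow(q - 1) + 1, p),   # decoy element
        pow(g, secret, p),                         # the element the prover knows
        pow(g, secrets.randbelow(q - 1) + 1, p)]   # decoy element
assert verify_membership(pubs, prove_membership(pubs, 1, secret))
```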
Incremental Proofs Maintain Constant-Size Sequential Work for Continuous Verification
This new cryptographic primitive keeps proofs constant-size for arbitrarily long sequential computations, solving the accumulated-overhead problem for verifiable delay functions (VDFs).
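One way to read "incremental" is through the interface it adds on top of an ordinary proof of sequential work: a proof covering t steps can be advanced to cover more steps without reproving from scratch, while its size stays constant in t. The Python protocol below is an illustrative interface under that assumption; the class and method names are placeholders, not the paper's API.

```python
from typing import Protocol

class IncrementalPoSW(Protocol):
    def prove(self, statement: bytes, steps: int) -> bytes:
        """Produce a constant-size proof that `steps` sequential steps were performed."""
        ...

    def extend(self, statement: bytes, proof: bytes, extra_steps: int) -> bytes:
        """Advance an existing proof by `extra_steps` without redoing prior work;
        the result stays constant-size, so a continuously running VDF can
        publish a fresh proof each epoch."""
        ...

    def verify(self, statement: bytes, proof: bytes, total_steps: int) -> bool:
        """Check the proof far faster than re-executing `total_steps` steps."""
        ...
```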
