Zero-Knowledge Proofs Secure Federated Learning Consensus
A novel Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism enhances privacy and efficiency in blockchain-secured federated learning.
Scaling zkSNARKs through Application and Proof System Co-Design
This research introduces "silently verifiable proofs" and a co-design approach to drastically reduce communication costs for scalable, privacy-preserving analytics.
Code-Based Zero-Knowledge Proofs for Post-Quantum Cryptographic Resilience
This research pioneers novel zero-knowledge proof protocols, including HammR and CROSS, leveraging coding theory to secure digital signatures against emerging quantum threats.
HyperPlonk++: Scalable Collaborative zk-SNARK for Distributed Proof Delegation
This research unveils a new collaborative zero-knowledge SNARK, HyperPlonk++, enabling efficient, private proof generation across distributed low-resource servers.
Eliminating Latency in Blockchain Threshold Cryptosystems for Enhanced Consensus
This research eliminates avoidable latency overhead in tight threshold cryptosystems, improving BFT blockchain efficiency, and formally characterizes the delays that cannot be removed.
Efficient Secure Multi-Party Comparison without Data Slack
A novel protocol drastically improves secure multi-party computation efficiency by eliminating data "slack," enabling practical privacy-preserving applications.
Enhancing Bitcoin Functionality and Privacy with Zero-Knowledge Proofs
This research introduces novel zero-knowledge proof protocols to enable private proof-of-reserves and trustless light clients on Bitcoin, expanding its core capabilities.
Zero-Knowledge Machine Learning Survey Categorizes Foundational Concepts and Challenges
This paper provides the first comprehensive categorization of Zero-Knowledge Machine Learning (ZKML), offering a critical framework to advance privacy-preserving AI and model integrity.
Zero-Knowledge Proofs Secure Large Language Models with Verifiable Privacy
Zero-Knowledge Proofs enable Large Language Models to operate with provable privacy and integrity, fostering trust in AI systems without exposing sensitive data.
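A recurring primitive across the entries above is the zero-knowledge proof of knowledge. As a minimal illustration of the underlying idea, the sketch below implements a toy Schnorr sigma protocol made non-interactive with the Fiat-Shamir heuristic: the prover demonstrates knowledge of a discrete logarithm x with y = g^x mod p without revealing x. This is a pedagogical sketch only, not any of the schemes listed above; the group parameters are deliberately tiny, whereas real deployments use ~256-bit elliptic-curve groups.

```python
import hashlib
import secrets

# Toy group parameters (illustration only). p is a safe prime,
# q = (p - 1) / 2 is prime, and g = 4 generates the order-q
# subgroup of quadratic residues mod p.
p = 10007
q = 5003
g = 4

def challenge(y: int, t: int) -> int:
    """Fiat-Shamir: derive the verifier's challenge by hashing the transcript."""
    digest = hashlib.sha256(f"{g}|{y}|{t}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def prove(x: int):
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)      # fresh random nonce
    t = pow(g, r, p)              # commitment to the nonce
    c = challenge(y, t)
    s = (r + c * x) % q           # response blends nonce and secret
    return y, (t, s)

def verify(y: int, proof) -> bool:
    """Accept iff g^s == t * y^c (mod p)."""
    t, s = proof
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, proof = prove(secrets.randbelow(q))
t, s = proof
assert verify(y, proof)                   # honest proof accepted
assert not verify(y, (t, (s + 1) % q))    # tampered response rejected
```

Completeness follows from g^s = g^(r + c·x) = t · y^c mod p; soundness and zero-knowledge rest on the hardness of the discrete logarithm and the random-oracle model for the hash. The SNARK systems surveyed above generalize this pattern to prove arbitrary computations succinctly.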
