Witness Encryption Indispensable for Resettable Zero-Knowledge Arguments
This research proves witness encryption is necessary for resettable zero-knowledge arguments, which remain secure even when a prover's randomness is reused, advancing practical privacy solutions.
Folding Schemes Enable Efficient Recursive Zero-Knowledge Arguments
A new cryptographic primitive, the folding scheme, dramatically reduces recursive proof overhead, unlocking practical incrementally verifiable computation.
ZKPoT: Zero-Knowledge Consensus for Private, Scalable Federated Learning
A novel Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism validates federated learning contributions privately, enhancing scalability and security.
Quantum-Resistant Blockchain Secures Transactions with Novel Consensus and Privacy
A new blockchain framework integrates lattice-based cryptography, sharded Proof-of-Stake, and zero-knowledge proofs to deliver quantum-safe, scalable, and private cryptocurrency transactions.
Accelerating Zero-Knowledge Proofs: Optimal Prover Time, Distributed Generation
New ZKP systems drastically cut proof generation time and enable distributed computation, unlocking scalable privacy for blockchain and AI.
Zero-Knowledge Proofs Transform Blockchain Scalability and Privacy
Zero-knowledge proofs enable verifiable computation without exposing underlying data, reshaping how decentralized systems scale while preserving privacy.
Ethereum Foundation Launches Privacy Roadmap and Restructures Scaling Explorations
The Ethereum Foundation's new privacy roadmap and PSE initiative establish foundational privacy primitives for a robust, censorship-resistant network.
Ethereum Foundation Launches Privacy Stewards to Advance On-Chain Confidentiality
The Ethereum Foundation's new Privacy Stewards initiative strategically integrates ZK-proofs and L2 solutions to fortify network confidentiality and interoperability.
Scalable Zero-Knowledge Proofs for Machine Learning Fairness
Researchers developed FAIRZK, a system that combines zero-knowledge proofs with new fairness bounds to efficiently verify a machine learning model's fairness without revealing sensitive data, enabling scalable, confidential algorithmic auditing.
