Ring Learning with Rounding Unlocks Efficient Post-Quantum Zero-Knowledge
A new ZKP of Knowledge based on the Ring Learning with Rounding assumption delivers post-quantum security with drastically reduced proof size and verification latency.
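For context, the Ring Learning with Rounding (Ring-LWR) assumption replaces the explicit noise of Ring-LWE with deterministic rounding; a minimal statement of the assumption (not the paper's specific parameterization) is: given the ring $R_q = \mathbb{Z}_q[x]/(x^n + 1)$, a uniform $a \leftarrow R_q$, and a secret $s \in R_q$, the sample
$$\bigl(a,\; b = \lfloor (p/q)\,(a \cdot s) \rceil \bmod p\bigr)$$
is assumed computationally indistinguishable from a uniform pair over $R_q \times R_p$. Because the "noise" comes from rounding rather than sampling, proof systems built on it avoid Gaussian sampling in the prover.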
Black-Box Commit-and-Prove SNARKs Accelerate Verifiable Machine Learning
Artemis introduces a black-box Commit-and-Prove SNARK architecture, radically cutting prover time by decoupling commitment checks from the core verifiable computation.
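As background, a commit-and-prove SNARK binds the proven statement to a previously published commitment; a generic form of the relation (not necessarily Artemis's exact formulation) is: the prover shows knowledge of a witness $w$ and randomness $r$ such that
$$c = \mathsf{Com}(w; r) \;\wedge\; R(x, w) = 1,$$
where $c$ is the public commitment (e.g., to model weights), $x$ the public input, and $R$ the relation encoding the ML computation. The black-box decoupling mentioned above means the commitment check is not re-encoded inside the arithmetic circuit for $R$, which is where the prover-time savings come from.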
Lattice-Based SNARKs Achieve Practical Post-Quantum Proof Size Reduction
A new lattice-based zkSNARK construction reduces post-quantum proof size by $10.3\times$, collapsing the massive overhead that hindered quantum-secure verifiable computation.
Sublinear Prover Space Unlocks Practical Zero-Knowledge Verifiable Computation
A novel cryptographic equivalence reframes ZKP generation as a Tree Evaluation problem, quadratically reducing prover memory for constrained devices.
Linear Prover Time Unlocks Practical Zero-Knowledge Proof Scalability
A new ZKP argument system achieves optimal linear prover time, dramatically lowering the cost barrier for large-scale verifiable computation.
0G Labs Launches Aristotle Mainnet, Unlocking Scalable Decentralized AI Computation
The Aristotle Mainnet establishes a modular, high-throughput Layer-1, fundamentally shifting AI from centralized silos to an open, verifiable public good.
Relativistic Zero-Knowledge Proofs Achieve Unconditional Quantum-Resistant Security
Leveraging special relativity's no-signaling constraint between spatially separated provers, this new ZKP primitive delivers unconditional security, decoupling trust from computational assumptions for quantum-resistant blockchain integrity.
Succinct State Proofs Decouple Verification from State Bloat
A novel polynomial commitment scheme enables constant-size cryptographic proofs of the entire blockchain state, resolving the critical state synchronization bottleneck and preserving decentralization.
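For intuition, a pairing-based polynomial commitment in the KZG style already yields constant-size openings; a minimal sketch (the article's scheme may differ in its details) is: commit to the state, encoded as a polynomial $\phi(X)$, with $C = g^{\phi(\tau)}$ for a setup point $\tau$. To prove $\phi(z) = y$, publish $\pi = g^{q(\tau)}$ where $q(X) = \frac{\phi(X) - y}{X - z}$, and the verifier checks the single pairing equation
$$e\!\left(C / g^{y},\; g\right) = e\!\left(\pi,\; g^{\tau} / g^{z}\right).$$
Both $C$ and $\pi$ are single group elements, independent of the size of the committed state.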
Verifiable Fine-Tuning Secures Large Language Models with Zero-Knowledge Proofs
zkLoRA is a new framework that cryptographically verifies LLM fine-tuning correctness without revealing model weights, unlocking private, auditable AI.
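For reference, LoRA fine-tuning constrains the weight update to a low-rank factorization, which keeps the statement small enough to prove; a hedged sketch of the claim being verified (the framework's exact relation may differ) is: a fine-tuned layer has the form
$$W' = W + B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$
and the proof attests knowledge of $(A, B)$ consistent with the committed weights and the declared fine-tuning procedure, without revealing $A$, $B$, or $W$.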
