Evolving Nullifiers and Oblivious Synchronization Achieve Scalable Private Payments
The new Oblivious Synchronization model enables validators to prune the linearly growing nullifier set, resolving the core scaling bottleneck for private transaction protocols.
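For intuition on the bottleneck being removed, here is a minimal Python sketch of the classic double-spend check that forces validators to retain one nullifier per spend forever; Oblivious Synchronization targets pruning exactly this ever-growing set. Names like ShieldedPool and spend are illustrative, not the protocol's API.

```python
import hashlib

class ShieldedPool:
    """Toy model of the double-spend check behind linearly growing nullifier sets.

    Every spent note adds one nullifier that validators must retain indefinitely in
    the classic design; this is the state the new model lets validators prune.
    """

    def __init__(self):
        self.nullifiers = set()  # grows by one entry per spend, never shrinks

    @staticmethod
    def nullifier(note_secret: bytes) -> bytes:
        # Deterministic tag derived from the note's secret key material.
        return hashlib.sha256(b"nullifier" + note_secret).digest()

    def spend(self, note_secret: bytes) -> bool:
        nf = self.nullifier(note_secret)
        if nf in self.nullifiers:
            return False  # double spend rejected
        self.nullifiers.add(nf)
        return True

pool = ShieldedPool()
assert pool.spend(b"note-1")
assert not pool.spend(b"note-1")   # replay detected
print(len(pool.nullifiers))        # one entry per distinct spend: linear growth
```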
Zero-Knowledge Proof of Training Secures Federated Learning Consensus and Privacy
The ZKPoT mechanism cryptographically validates model contributions using zk-SNARKs, resolving the critical trade-off between consensus efficiency and data privacy.
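A minimal sketch of where proving and verifying sit in a ZKPoT-style round is shown below; the zk-SNARK is replaced by a transparent hash tag, so it offers no privacy or soundness and all names are hypothetical. It only illustrates that validators act on a commitment and a proof, never on raw data or model updates.

```python
import hashlib
from dataclasses import dataclass

# The "proof" here is a stand-in tag, not a zk-SNARK; this only shows the message flow.

@dataclass
class Contribution:
    commitment: bytes  # commitment to the local model update
    proof: bytes       # stand-in for the zk-SNARK "proof of training"

def train_and_prove(update: bytes, accuracy: float, threshold: float) -> Contribution:
    assert accuracy >= threshold, "quality bar enforced inside the (stubbed) proof"
    commitment = hashlib.sha256(update).digest()
    proof = hashlib.sha256(b"zkpot" + commitment).digest()  # placeholder tag
    return Contribution(commitment, proof)

def validator_accepts(c: Contribution) -> bool:
    # Validators see only the commitment and proof, never the data or raw update.
    return c.proof == hashlib.sha256(b"zkpot" + c.commitment).digest()

c = train_and_prove(b"weight-delta-bytes", accuracy=0.91, threshold=0.85)
assert validator_accepts(c)
```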
Zero-Knowledge Proofs Verifiably Secure Large Language Model Inference
A novel ZKP system, zkLLM, enables the efficient, private verification of 13-billion-parameter LLM outputs, securing AI integrity and intellectual property.
Collaborative zk-SNARKs Enable Private, Decentralized, Scalable Proof Generation
Scalable collaborative zk-SNARKs use MPC to secret-share the witness, simultaneously achieving privacy and $24\times$ faster proof outsourcing.
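A toy example of the secret-sharing step, assuming simple additive sharing over a prime field (the modulus and helper names are illustrative): each prover in the collaborative cluster holds one share of the witness, no single prover learns it, and the shares reconstruct it exactly. The MPC proof generation itself is out of scope here.

```python
import secrets

P = 2**61 - 1  # toy prime modulus

def share_witness(witness: list[int], n_provers: int) -> list[list[int]]:
    shares = [[0] * len(witness) for _ in range(n_provers)]
    for j, w in enumerate(witness):
        parts = [secrets.randbelow(P) for _ in range(n_provers - 1)]
        parts.append((w - sum(parts)) % P)  # last share makes the column sum to w
        for i in range(n_provers):
            shares[i][j] = parts[i]
    return shares

def reconstruct(shares: list[list[int]]) -> list[int]:
    return [sum(col) % P for col in zip(*shares)]

witness = [7, 42, 1_000_003]
shares = share_witness(witness, n_provers=3)
assert reconstruct(shares) == witness  # any single prover's share reveals nothing alone
```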
Selective Batched IBE Enables Constant-Cost Threshold Key Issuance
This new cryptographic primitive enables distributed authorities to generate a single, succinct decryption key for an arbitrary batch of identities at a cost independent of the batch size, resolving the key-management scalability bottleneck in threshold systems.
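The cost profile can be illustrated with a toy threshold derivation over plain modular arithmetic; this is not the paper's construction and provides no real security, but it shows t-of-n authorities combining partial keys over a digest of the whole batch into a single field element whose size does not depend on how many identities the batch contains. All names and parameters below are assumptions.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime field

def shamir_share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P) for x in range(1, n + 1)]

def batch_digest(identities: list[str]) -> int:
    # One digest covers the entire batch, so issuance cost is independent of its size.
    return int.from_bytes(hashlib.sha256("|".join(sorted(identities)).encode()).digest(), "big") % P

def partial_key(share: tuple[int, int], digest: int) -> tuple[int, int]:
    x, s = share
    return (x, (s * digest) % P)  # one authority's contribution for this batch

def combine(partials: list[tuple[int, int]]) -> int:
    key = 0
    for x_i, v_i in partials:
        lam = 1
        for x_j, _ in partials:
            if x_j != x_i:
                lam = lam * x_j % P * pow(x_j - x_i, -1, P) % P
        key = (key + lam * v_i) % P
    return key  # a single field element, regardless of batch size

master = secrets.randbelow(P)
shares = shamir_share(master, t=3, n=5)
d = batch_digest(["alice@example.com", "bob@example.com", "carol@example.com"])
key = combine([partial_key(s, d) for s in shares[:3]])
assert key == master * d % P
```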
Constant-Cost Batch Verification for Private Computation over Secret-Shared Data
New silently verifiable proofs achieve constant-size verifier communication for batch ZKPs over secret shares, unlocking scalable private computation.
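For a sense of how a batch of checks can collapse into constant verifier communication, here is the classic random-linear-combination zero-test over additively shared values; it is not the silently verifiable proof system itself, and the names and parameters are illustrative. Each verifier folds its shares locally and reveals a single field element, regardless of batch size.

```python
import hashlib
import secrets

P = 2**61 - 1

def fold(shares: list[int], seed: bytes) -> int:
    # Derive public coefficients from a common seed, then combine the shares locally.
    acc = 0
    for i, s in enumerate(shares):
        r = int.from_bytes(hashlib.sha256(seed + i.to_bytes(4, "big")).digest(), "big") % P
        acc = (acc + r * s) % P
    return acc

def batch_check(shares_a: list[int], shares_b: list[int], seed: bytes) -> bool:
    # Each party reveals one field element, independent of the batch size.
    return (fold(shares_a, seed) + fold(shares_b, seed)) % P == 0

# Secret-share a batch of values that should all be zero (the honest case).
batch = [0] * 1000
shares_a = [secrets.randbelow(P) for _ in batch]
shares_b = [(v - a) % P for v, a in zip(batch, shares_a)]
assert batch_check(shares_a, shares_b, seed=b"public-coin")

# A single tampered entry is caught except with probability about 1/P.
shares_b[123] = (shares_b[123] + 1) % P
assert not batch_check(shares_a, shares_b, seed=b"public-coin")
```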
