Separable Homomorphic Commitment Achieves Constant Overhead for Verifiable Aggregation
The new Separable Homomorphic Commitment primitive reduces client-side overhead from logarithmic to constant time for verifiable, secure data aggregation.
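The constant-overhead claim rests on the commitments being additively homomorphic: the server can combine many clients' commitments into one, and a client verifies the aggregate by checking a single commitment regardless of how many parties contributed. A minimal toy sketch of this idea using Pedersen-style commitments (toy parameters, not the paper's actual Separable Homomorphic Commitment construction):

```python
# Toy sketch of additively homomorphic commitments for verifiable
# aggregation. Group parameters are illustrative only, NOT secure,
# and NOT the scheme from the paper.

p = 2**127 - 1          # toy prime modulus
g, h = 3, 5             # toy generators (assumed independent)

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Three clients commit to private updates (message, blinding) pairs.
updates = [(7, 11), (13, 17), (21, 23)]
commits = [commit(m, r) for m, r in updates]

# Server multiplies commitments -- homomorphically adding the messages.
agg = 1
for c in commits:
    agg = (agg * c) % p

# A client checks the claimed sum against ONE commitment, so the
# verification cost is constant in the number of contributors.
m_sum = sum(m for m, _ in updates)
r_sum = sum(r for _, r in updates)
assert agg == commit(m_sum, r_sum)
```

The key property is that multiplying commitments adds the committed values, so the aggregate opens to the sum of all client inputs.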
Zero-Knowledge Proof of Training Secures Decentralized Federated Learning
ZKPoT consensus uses zk-SNARKs to verify machine learning contributions without revealing private training data, replacing stake-based validator selection and resolving the privacy-verifiability trade-off in decentralized AI.
Multi-Client Functional Encryption Secures Private Multi-Source Data Computation
A novel Multi-Client Functional Encryption scheme enables secure, privacy-preserving inner product computations over data from multiple independent sources.
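In multi-client functional encryption for inner products, each client independently encrypts its own input, and a functional key for a weight vector y lets an evaluator learn only the inner product of the joint inputs with y. A toy one-time-pad-style sketch of this decryption structure (random additive pads stand in for the real scheme's keyed/DDH machinery; not the paper's construction):

```python
# Toy sketch of multi-client inner-product functional encryption.
# Each client masks its value with a secret pad; the functional key
# cancels exactly the y-weighted pads, revealing only <x, y>.
import random

p = 2**61 - 1                                    # toy prime modulus
n = 4                                            # independent clients
pads = [random.randrange(p) for _ in range(n)]   # per-client secrets

def encrypt(i, x_i):
    """Client i masks its private value x_i."""
    return (x_i + pads[i]) % p

def keygen(y):
    """Authority issues a functional key for weight vector y."""
    return sum(y_i * k for y_i, k in zip(y, pads)) % p

def decrypt(cts, y, sk_y):
    """Evaluator recovers <x, y> and nothing else."""
    return (sum(y_i * c for y_i, c in zip(y, cts)) - sk_y) % p

x = [3, 1, 4, 1]                  # private inputs, one per client
y = [2, 7, 1, 8]                  # public weight vector
cts = [encrypt(i, xi) for i, xi in enumerate(x)]
sk_y = keygen(y)
assert decrypt(cts, y, sk_y) == 25   # 3*2 + 1*7 + 4*1 + 1*8
```

Individual inputs stay hidden behind their pads; only the combination authorized by the functional key becomes computable.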
Decentralized Federated Learning Framework Enhances IoT Privacy and Security
A novel framework integrates decentralized attribute-based encryption (DABE), homomorphic encryption (HE), secure multi-party computation (SMPC), and blockchain to secure federated learning on IoT devices, enabling privacy-preserving AI and verifiable data exchange.
