Proof-of-Data Hybrid Consensus Secures Scalable Deterministic Finality
The Proof-of-Data protocol decouples asynchronous transaction execution from BFT-based finality, yielding a hybrid model that scales throughput while retaining deterministic finality.
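To make the decoupling concrete, here is a minimal Python sketch under assumed simplifications (it is not the Proof-of-Data implementation): transactions are executed asynchronously into a candidate batch, and a standard 2f+1-of-3f+1 BFT quorum vote later finalizes the already-computed state root. All names and thresholds are illustrative.

```python
# Illustrative sketch only: execution runs ahead of finality, and a BFT quorum
# vote later finalizes the already-executed batch. Names/thresholds are assumed.
from dataclasses import dataclass, field

@dataclass
class Batch:
    txs: list                          # transactions executed optimistically
    state_root: str                    # result of asynchronous execution
    votes: set = field(default_factory=set)

def execute_async(txs):
    """Execute transactions immediately, before consensus, and commit to the result."""
    state_root = hex(hash(tuple(txs)) & 0xFFFFFFFF)
    return Batch(txs=txs, state_root=state_root)

def bft_finalize(batch, validator_votes, n, f):
    """Deterministic finality: the batch is final once 2f+1 of n = 3f+1 validators
    sign the same state root (standard BFT quorum, assumed here)."""
    batch.votes = {v for v, root in validator_votes.items() if root == batch.state_root}
    return len(batch.votes) >= 2 * f + 1

# Usage: 4 validators tolerate f = 1 fault; execution never waits on the vote.
batch = execute_async(["tx1", "tx2", "tx3"])
votes = {v: batch.state_root for v in ("v1", "v2", "v3")}   # v4 is silent/faulty
print(bft_finalize(batch, votes, n=4, f=1))                 # True -> deterministically final
```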
Set Byzantine Consensus Decentralizes Rollup Sequencers and Data Availability
Set Byzantine Consensus introduces a decentralized "arranger" for rollups, removing the single-node sequencer bottleneck and strengthening censorship resistance.
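The sketch below illustrates the general idea of a committee-based arranger rather than the actual Set Byzantine Consensus protocol: a transaction enters the canonical batch once at least f+1 committee members report it, and the final ordering is a deterministic rule every honest node can recompute, so no single sequencer can censor or reorder unilaterally.

```python
# Minimal, illustrative sketch of a decentralized "arranger" committee.
from collections import Counter

def arrange(reports, f):
    """reports: mapping from committee member -> list of observed transactions.
    A tx is included once at least f+1 members report it (so at least one honest
    member vouches for it); ordering is a deterministic tie-break all nodes share."""
    counts = Counter(tx for txs in reports.values() for tx in set(txs))
    included = [tx for tx, c in counts.items() if c >= f + 1]
    return sorted(included)   # deterministic order every honest node can recompute

reports = {
    "a1": ["tx9", "tx2"],
    "a2": ["tx2", "tx7"],
    "a3": ["tx2", "tx9"],
    "a4": ["tx7", "tx9"],
}
print(arrange(reports, f=1))  # ['tx2', 'tx7', 'tx9'] -- no single arranger decides
```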
Zero-Knowledge Proof of Training Secures Decentralized AI Consensus
A new Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism uses zk-SNARKs to cryptographically verify model performance, sidestepping Proof-of-Stake's centralization pressure while preserving data privacy in decentralized machine learning.
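A hedged interface sketch of how such a check could sit in consensus follows; the zk-SNARK itself is stubbed out, and every name here (TrainingProof, snark_verify_stub, the accuracy threshold) is an illustrative assumption, not the scheme's actual API.

```python
# Interface sketch only: validators accept a model contribution by checking a
# succinct proof of its performance claim instead of re-running evaluation on
# raw data. The "proof" below is a stand-in, not a real zk-SNARK.
from dataclasses import dataclass

@dataclass
class TrainingProof:
    model_commitment: str    # commitment to the model weights
    claimed_accuracy: float  # public statement being proven
    proof_blob: bytes        # placeholder for the zk-SNARK proof

def verify_contribution(proof: TrainingProof, accuracy_threshold: float,
                        snark_verify) -> bool:
    """Validator-side check: accept the update iff the claim clears the
    threshold and the (stubbed) SNARK verifier accepts the proof."""
    if proof.claimed_accuracy < accuracy_threshold:
        return False
    return snark_verify(proof.model_commitment, proof.claimed_accuracy,
                        proof.proof_blob)

def snark_verify_stub(commitment, accuracy, blob) -> bool:
    # Hypothetical stub standing in for a real zk-SNARK verifier.
    return len(blob) > 0

p = TrainingProof("0xabc", 0.91, b"\x01\x02")
print(verify_contribution(p, accuracy_threshold=0.85, snark_verify=snark_verify_stub))  # True
```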
Hybrid BFT Achieves Both Probabilistic Speed and Periodic Finality
Albatross combines the high throughput of speculative BFT with Tendermint-style periodic provable finality, resolving the trade-off between consensus performance and finality.
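The toy sketch below shows the hybrid pattern in the abstract, not Albatross's actual block format: fast speculative "micro" blocks between periodic "macro" checkpoints that a BFT vote finalizes. The block names and checkpoint interval k are assumptions for illustration.

```python
# Toy sketch of the hybrid pattern: speculative micro blocks, periodic finalized
# macro checkpoints. Structure and parameters are assumed, not Albatross's own.
def produce_chain(num_blocks, k, bft_vote):
    chain = []
    finalized_height = -1
    for h in range(num_blocks):
        if h % k == 0 and h > 0:
            # Macro block: run a (stubbed) Tendermint-style vote for finality.
            if bft_vote(h):
                finalized_height = h
            chain.append(("macro", h))
        else:
            # Micro block: appended immediately, only probabilistically safe.
            chain.append(("micro", h))
    return chain, finalized_height

chain, final = produce_chain(num_blocks=10, k=4, bft_vote=lambda h: True)
print(final)   # 8 -> blocks 0..8 are provably final; block 9 is still speculative
```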
Graded Broadcast Protocol Lowers Asynchronous BFT Latency by Bypassing Agreement
The Graded Broadcast primitive bypasses the costly agreement stage in asynchronous BFT, cutting consensus latency and raising throughput.
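A minimal grading rule, with thresholds chosen for illustration (concrete graded-broadcast variants differ), conveys the idea: receivers attach a grade to the delivered value, and downstream logic can act on high-grade outputs without running a full agreement instance.

```python
# Illustrative grading rule; thresholds are assumptions. Grade 2 means strong
# delivery (every honest party outputs the same value), grade 1 means weak
# delivery (at least one honest party echoed it), grade 0 means no delivery.
def grade_output(matching_echoes: int, n: int, f: int):
    """matching_echoes: echoes received for the majority value among n parties,
    at most f of which are Byzantine."""
    if matching_echoes >= n - f:
        return 2   # strong delivery: safe to use directly
    if matching_echoes >= f + 1:
        return 1   # weak delivery: vouched for by at least one honest party
    return 0       # no delivery

n, f = 4, 1
print(grade_output(3, n, f))  # 2
print(grade_output(2, n, f))  # 1
print(grade_output(1, n, f))  # 0
```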
Non-Linear Stake Weighting Advances Proof-of-Stake Decentralization and Resilience
New non-linear stake-weighting models, square-root and logarithmic, rebalance validator influence to strengthen PoS decentralization.
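A small worked example, using illustrative stake numbers and weight functions, shows how concave weighting compresses a large holder's share of influence.

```python
# Worked example of the re-weighting idea; stakes and formulas are illustrative.
import math

def influence(stakes, weight_fn):
    """Normalize each validator's weighted stake into a share of total influence."""
    weights = {v: weight_fn(s) for v, s in stakes.items()}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

stakes = {"whale": 900, "small_1": 50, "small_2": 50}

linear = influence(stakes, lambda s: s)
sqrt_w = influence(stakes, math.sqrt)
log_w  = influence(stakes, lambda s: math.log(1 + s))

print(f"whale share  linear={linear['whale']:.2f}  "
      f"sqrt={sqrt_w['whale']:.2f}  log={log_w['whale']:.2f}")
# linear=0.90  sqrt=0.68  log=0.46 -> concave weighting compresses dominance
```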
Compositional Formal Verification Secures Complex DAG Consensus Protocols
This framework modularizes DAG consensus proofs into reusable components, dramatically reducing verification effort and ensuring robust protocol safety.
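As a rough structural analogy (not the framework itself), the sketch below splits a DAG protocol into a DAG-construction component and an ordering component, each carrying its own invariant, so the top-level safety argument only composes the two invariants rather than reasoning about the whole protocol at once.

```python
# Structural sketch of compositional reasoning about a DAG protocol; both
# components and their invariants are illustrative assumptions.
class DagLayer:
    """Component 1: builds the DAG. Local invariant: every vertex's parents
    already exist (causal completeness)."""
    def __init__(self):
        self.vertices = {}

    def add(self, vid, parents):
        assert all(p in self.vertices for p in parents), "causal completeness"
        self.vertices[vid] = parents

class OrderingLayer:
    """Component 2: linearizes the DAG. Local invariant: the output respects
    the DAG's causal order. It relies only on DagLayer's invariant, not its code."""
    def order(self, dag: DagLayer):
        out, seen = [], set()
        def visit(v):
            for p in dag.vertices[v]:
                if p not in seen:
                    visit(p)
            if v not in seen:
                seen.add(v)
                out.append(v)
        for v in dag.vertices:
            visit(v)
        return out

dag = DagLayer()
dag.add("a", [])
dag.add("b", ["a"])
dag.add("c", ["a", "b"])
print(OrderingLayer().order(dag))  # ['a', 'b', 'c'] -- causal order preserved
```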
Mechanism Design Enforces Truthful Consensus Using Staked Collateral
A novel revelation mechanism leverages staked collateral to ensure validator truthfulness, resolving consensus disputes by making honest block proposal the unique subgame-perfect equilibrium.
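A back-of-the-envelope payoff comparison, with all numbers assumed, shows the incentive condition at work: honesty dominates whenever the expected slash (detection probability times stake) exceeds the gain from lying.

```python
# Illustrative payoff check; rewards, stake, and detection probability are assumed.
def payoff(honest: bool, reward: float, cheat_gain: float,
           stake: float, p_caught: float) -> float:
    if honest:
        return reward
    # Dishonest path: pocket the gain unless a challenger triggers slashing.
    return reward + cheat_gain - p_caught * stake

reward, cheat_gain, stake, p_caught = 1.0, 5.0, 100.0, 0.1
print(payoff(True, reward, cheat_gain, stake, p_caught))   #  1.0
print(payoff(False, reward, cheat_gain, stake, p_caught))  # -4.0 < 1.0 -> lying never pays
```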
Zero-Knowledge Proof of Training Secures Federated Learning Consensus
ZKPoT uses zk-SNARKs to verify model contributions privately, reconciling data privacy with efficient consensus in decentralized federated learning.
