Asynchronous Partial Vector Agreement Enables Constant-Round Error-Free Byzantine Consensus
Introducing Asynchronous Partial Vector Agreement, a new primitive that enables information-theoretically secure Byzantine consensus with optimal constant round complexity.
Erasure Code Commitments Enable Efficient Trustless Data Availability Sampling
This new cryptographic primitive formally guarantees that committed data is a valid codeword, enabling poly-logarithmic Data Availability Sampling without a trusted setup.
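The security argument behind Data Availability Sampling can be sketched numerically. With a rate-1/2 erasure code, an adversary must withhold more than half of the chunks to make data unrecoverable, so each uniform random sample detects withholding with probability near 1/2. The helper below is illustrative (`detection_miss_probability` is not from the paper) and models sampling with replacement:

```python
def detection_miss_probability(num_chunks: int, withheld: int, samples: int) -> float:
    # Probability that `samples` independent uniform queries all land on
    # available chunks, i.e. that withholding goes undetected.
    p_available = (num_chunks - withheld) / num_chunks
    return p_available ** samples

# Rate-1/2 code over 256 chunks: any 128 suffice to reconstruct, so an
# adversary must withhold at least 129 chunks to destroy availability.
miss = detection_miss_probability(num_chunks=256, withheld=129, samples=30)
```

With only 30 samples per client, the chance of missing an unavailability attack is below one in a billion, which is why each light client needs only a poly-logarithmic amount of work.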
Layered Architecture Resolves Trilemma via Storage-Backed Consensus and Sharding
CrustChain integrates reputation-weighted Proof-of-Capacity with erasure coding and sharding, achieving high throughput and durability by decoupling storage.
Automated Formal Analysis Mitigates DeFi Oracle Input Vulnerabilities
OVer, a formal verification framework, uses SMT solvers to automatically identify and guard against oracle manipulation, securing DeFi protocols against skewed data.
Batch Mechanism Design Achieves Provable MEV Resilience for Automated Market Makers
This novel batch-clearing AMM mechanism provides provable arbitrage resilience, shifting MEV mitigation from consensus to the application layer.
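The core intuition of batch clearing can be sketched in a few lines. This is a generic uniform-price sketch against a constant-product pool, not the paper's mechanism: all orders in a batch are netted and settled at one price, so reordering or sandwiching trades within a batch confers no advantage.

```python
def clear_batch(x_reserve: float, y_reserve: float, x_inputs: list[float]) -> list[float]:
    # All orders selling X for Y in one batch clear at a single uniform price:
    # the pool is updated once with the *net* input, and each trader receives
    # a pro-rata share of the total output, so intra-batch order is irrelevant.
    k = x_reserve * y_reserve
    total_in = sum(x_inputs)
    total_out = y_reserve - k / (x_reserve + total_in)  # constant-product output
    return [total_out * amt / total_in for amt in x_inputs]
```

Because every trader pays the same per-unit price, the classic sandwich (front-run, victim trade, back-run) collapses into a single clearing computation with nothing to extract inside the batch.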
Time-Exact Multi-Blockchains Ensure Predictable Decentralized Multi-Agent Systems
Time-exact multi-blockchains leverage polynomial-complexity protocols and a hierarchical architecture to guarantee predictable transaction finality, enabling trustworthy AI coordination.
Differential Privacy Guarantees Fair Transaction Ordering in State Machine Replication
Linking Differential Privacy to SMR's equal opportunity property eliminates algorithmic bias, enabling cryptographically fair, MEV-resistant ordering protocols.
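One way to see the link between differential privacy and equal-opportunity ordering is timestamp noising. The sketch below is hypothetical (the `dp_order` helper is not from the paper): adding Laplace noise to receive times makes two transactions that arrive within the sensitivity window land in either order with near-equal probability, bounded by epsilon.

```python
import random

def dp_order(arrivals: dict[str, float], epsilon: float, sensitivity: float) -> list[str]:
    # Order transactions by Laplace-noised receive timestamps. Transactions
    # whose true arrivals differ by less than `sensitivity` are ordered either
    # way with near-equal probability, controlled by `epsilon` -- a
    # differential-privacy reading of SMR's equal-opportunity property.
    scale = sensitivity / epsilon
    def laplace() -> float:
        # Laplace(0, scale) sampled as the difference of two exponentials.
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)
    noisy = {tx: t + laplace() for tx, t in arrivals.items()}
    return sorted(noisy, key=noisy.get)

random.seed(7)
order = dp_order({"a": 0.000, "b": 0.001, "c": 5.000}, epsilon=1.0, sensitivity=0.01)
```

Transactions far apart in time (like "c" above) keep their position, while near-simultaneous ones are randomized, removing the deterministic tie-breaking an MEV searcher could game.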
Characterizing ZKP GPU Bottlenecks Accelerates Verifiable Computation Scaling
ZKProphet empirically identifies the Number-Theoretic Transform (NTT) as consuming roughly 90% of GPU runtime, shifting optimization focus to unlock practical ZKP scaling.
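For context, the kernel in question is a radix-2 FFT over a prime field. A minimal recursive sketch (illustrative, not ZKProphet's GPU code) shows the butterfly structure whose modular multiplications dominate prover runtime:

```python
def ntt(a: list[int], omega: int, p: int) -> list[int]:
    # Cooley-Tukey radix-2 NTT over Z_p; `omega` is a primitive len(a)-th
    # root of unity mod p. The butterfly adds and modular multiplications
    # below are the operations that dominate GPU time in ZKP provers.
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % p, p)
    odd = ntt(a[1::2], omega * omega % p, p)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % p
        out[i] = (even[i] + t) % p
        out[i + n // 2] = (even[i] - t) % p
        w = w * omega % p
    return out

# Demo over Z_17 with n = 8: omega = 9 is a primitive 8th root of unity mod 17.
p, omega, n = 17, 9, 8
coeffs = [1, 2, 3, 4, 5, 6, 7, 8]
evals = ntt(coeffs, omega, p)
# Inverse transform: run the NTT with omega^(-1), then scale by n^(-1) mod p.
roundtrip = [x * pow(n, -1, p) % p for x in ntt(evals, pow(omega, -1, p), p)]
```

Production provers run this over 256-bit fields on polynomials with millions of coefficients, which is why memory access patterns and modular-arithmetic throughput, not proof-system logic, set the scaling limit.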
ZKPoT: Private Consensus Verifies Decentralized Machine Learning
ZKPoT consensus leverages zk-SNARKs to cryptographically verify machine learning model contributions without revealing private training data or parameters.
