
Multi-Head Latent Attention and Mixture of Experts (MoE) in Transformers

Jan 27, 2025 AI