Flash Attention 2 Math Derivation
- 22 December 2025
This blog post is a detailed mathematical derivation of the well-known Flash Attention 2 (FA2), a memory-efficient, highly optimized, de facto standard kernel implementation [Dao, 2023, Dao et al., 2022, Shah et al., 2024] of the scaled dot-product attention operation introduced by the Transformer [Vaswani et al., 2023], which is re-implemented and further extended in the Flex-Flash-Attention kernels of MagiAttention [Zewei and Yunpeng, 2025].
Support Learnable Attention Sink
- 17 November 2025
Large-scale models assign significant attention to a few tokens (such as the initial tokens in the sequence) even when they are not semantically important, a phenomenon known as attention sink [Xiao et al., 2024]. Researchers attribute this to the nature of \(softmax\), which forces the attention scores of each query token to sum to \(1\) over all key tokens in the context, even when a query token does not strongly attend to any key token at all [Gu et al., 2025]. Therefore, during training, we can deliberately add learnable sink tokens to the key sequence of each query token to collect the unneeded attention scores and relax the "sum-up-to-one" constraint, as a learnable version of \(\textit{off-by-one softmax}\) [Miller, 2024].
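The sink mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the MagiAttention kernel: the scores and the sink logit value below are made-up numbers, and in a real model the sink logit would be a learnable parameter rather than a constant. The point is only that appending a sink logit to the key dimension lets the weights over the real keys sum to less than one.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Hypothetical attention scores of one query token against 4 key tokens.
scores = np.array([0.2, -1.0, 0.5, 0.1])

# Standard softmax: the weights over the real keys must sum to exactly 1.
plain = softmax(scores)

# Off-by-one-style sink: append a sink logit (a learnable scalar in
# practice; fixed here for illustration). The sink absorbs part of the
# probability mass, so the weights over the real keys sum to less than 1.
sink_logit = 1.5
with_sink = softmax(np.append(scores, sink_logit))[:-1]

print(plain.sum())      # exactly 1 (up to floating point)
print(with_sink.sum())  # strictly less than 1
```

Setting the sink logit to \(0\) recovers Miller's fixed \(\textit{off-by-one softmax}\) (a constant \(+1\) in the denominator); making it learnable lets the model decide per head how much attention mass to discard.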
MagiAttention
- 21 April 2025
A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Mask Training