The Transformer Family Version 2.0
Highlights
- Weng's 2023 update is roughly twice the length of her 2020 original: a comprehensive refactoring that restructures the hierarchy of Transformer variants rather than appending new papers
- The post derives the scaled dot-product attention formula attn(Q, K, V) = softmax(QK^T / sqrt(d_k)) V and establishes notation (d, h, L, N) used consistently across all variants
- Self-attention is fundamentally a permutation-invariant set operation; understanding this property explains why encoder-only (BERT) and decoder-only (GPT) simplifications work for bidirectional context vs. autoregressive generation
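The two technical highlights above can be checked in a few lines. Below is a minimal NumPy sketch (not Weng's code) of scaled dot-product attention for a single head, with toy dimensions chosen for illustration; the final assertion verifies the permutation property: permuting the input positions simply permutes the output rows, since nothing in the formula depends on position order.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn(Q, K, V):
    # Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

# Toy sizes (assumptions, not from the post): L positions, d_k key dim, d_v value dim.
rng = np.random.default_rng(0)
L, d_k, d_v = 5, 8, 4
Q = rng.normal(size=(L, d_k))
K = rng.normal(size=(L, d_k))
V = rng.normal(size=(L, d_v))

out = attn(Q, K, V)  # shape (L, d_v): one output vector per position

# Without positional encodings, attention treats the sequence as a set:
# permuting the rows of Q, K, V permutes the output rows identically.
perm = rng.permutation(L)
assert np.allclose(attn(Q[perm], K[perm], V[perm]), out[perm])
```

This is exactly why Transformers need explicit positional encodings: the attention operation itself carries no information about token order.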
Original excerpt
Lil'Log | The Transformer Family Version 2.0 | Date: January 27, 2023 | Estimated Reading Time: 45 min | Author: Lilian Weng
Many new Transformer architecture improvements have been proposed since my last post on "The Transformer Family" about three years ago. Here I did a big refactoring and enrichment of that 2020 post: restructure the hierarchy of sections and improve many sections with more recent papers. Version 2.0 is a superset of the old version, about twice the length.
X ∈ R^(L×d): The input sequence where each element has been mapped into an embedding vector of shape d, same as the model size.
W^v ∈ R^(d×d_v): The value weight…
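The notation in the excerpt maps directly onto array shapes. As a minimal sketch with toy dimensions (the specific values of L, d, and d_v here are assumptions for illustration, not from the post), projecting the input X through the value weight matrix W^v yields one value vector per position:

```python
import numpy as np

# Notation from the post: d = model size, L = sequence length, d_v = value dim.
# The concrete numbers below are toy choices for illustration.
L, d, d_v = 6, 16, 8
rng = np.random.default_rng(1)

X = rng.normal(size=(L, d))      # input sequence: one d-dim embedding per position
W_v = rng.normal(size=(d, d_v))  # value weight matrix W^v

V = X @ W_v                      # value vectors, shape (L, d_v)
assert V.shape == (L, d_v)
```

The query and key projections W^q and W^k work the same way, mapping the d-dimensional embeddings into d_k-dimensional spaces.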