# Selected ML Papers

A curated collection of foundational machine learning papers.
Mirror of arXiv preprints for offline reading.

## Contents

| Paper | Year | Why it matters |
|-------|------|----------------|
| Attention Is All You Need | 2017 | Transformer architecture |
| ResNet | 2015 | Deep networks become trainable |
| BERT | 2018 | Pretrained language understanding |
| GPT-3 | 2020 | Few-shot learning emerges |
| LLaMA | 2023 | Open-weight foundation models |
| LLaMA 2 | 2023 | Open commercial-friendly LLMs |
| Mamba | 2023 | State space models challenge attention |

## Source
All papers are sourced from arXiv.org or their original conference proceedings.

## License
Each paper retains the license of its original publication; refer to the original sources for terms of use.
