For the MVA master's program (Mathématiques, Vision, Apprentissage), Paris

Presentation

The aim of this course is to introduce Large Language Models, with particular emphasis on the Transformer architecture. The first half of the course covers the essential concepts behind this architecture; the objective is to code a usable Transformer in its entirety, having understood the attention mechanisms in depth and with mathematical tools. The second half of the course discusses applications of LLMs in formal settings (code, proofs).
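
To give a concrete taste of the first lecture's material, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer. It assumes PyTorch; the function name, shapes, and usage example are illustrative and not taken from the course itself.

    import math
    import torch

    def scaled_dot_product_attention(q, k, v, mask=None):
        # q, k, v: (batch, seq_len, d_k) tensors of queries, keys, values.
        d_k = q.size(-1)
        # Similarity of every query with every key, scaled by sqrt(d_k).
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
        if mask is not None:
            # Positions where the mask is False cannot be attended to.
            scores = scores.masked_fill(~mask, float("-inf"))
        # Softmax over the key dimension yields attention weights.
        weights = torch.softmax(scores, dim=-1)
        # The output is a weighted average of the values.
        return weights @ v

    # Usage: one batch of 4 tokens of dimension 8, with causal (GPT-style) masking.
    q = k = v = torch.randn(1, 4, 8)
    causal = torch.tril(torch.ones(4, 4)).bool()
    out = scaled_dot_product_attention(q, k, v, mask=causal)
    print(out.shape)  # torch.Size([1, 4, 8])

The causal mask is what makes a GPT-style decoder autoregressive: each token attends only to itself and to earlier positions.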

Schedule

In 2025, the course runs on Tuesdays, 1pm-4pm, from January to March.
  • 7 Jan: Attention mechanisms, GPT from scratch
  • 14 Jan: Efficient Transformer architectures
  • 21 Jan:
  • 28 Jan:
  • 4 Feb:
  • 11 Feb:
  • 4 Mar:
  • 11 Mar:
  • 25 Mar: Project presentations

Teachers