Dates: Five sessions from April 9th to May 9th, 2024
Venue: ICMAT-CSIC (https://www.icmat.es/es/como-llegar/)
Venue: CUNEF University (Aula F1.2. Campus Leonardo Prieto Castro. C. de Leonardo Prieto Castro, 2, Moncloa – Aravaca, 28040 Madrid)
Online site: TBA.
Working language: English.
Limited number of positions available.
Deadline to sign up: April 1st, 2024
Form to Register
Organizers: Roi Naveiro (CUNEF University) and David Ríos Insua (ICMAT-CSIC – AIHUB)
Large language models (LLMs) have driven much of the recent hype about AI, even prompting last-minute modifications to the EU AI Act. We pursue two objectives with this activity:
— Short term: to provide a forum that facilitates understanding of the latest developments in LLMs.
— Medium term: to explore the possibility of creating a working group on relevant novel areas in LLMs, most likely Bayesian methods for LLMs and/or the security of LLMs.
Should you be interested in either of the above objectives, please join us; we shall include you in the group reading list.
Five sessions, alternating between the ICMAT and CUNEF sites, each led by a facilitator, though we expect involvement and discussion from the audience. The course starts from introductory and supporting concepts and touches upon the most recent issues. Below is the workplan for the different sessions. The suggested readings will be updated as we go along. Besides key papers, we shall provide some computational examples to facilitate grasping the concepts.
Should you have any questions please contact david.rios@icmat.es or roi.naveiro@cunef.edu
Session 1: Neural Networks and Sequence Modelling
● Purpose: Provide basic background information to follow the forthcoming sessions.
● Reading suggestions: “Current Advances in Neural Networks” by Gallego and Ríos Insua provides a broad overview of key concepts and issues in relation to NNs (https://www.annualreviews.org/doi/pdf/10.1146/annurev-statistics-040220-112019). An overview of sequence modeling techniques and an empirical comparison of different algorithms is given in “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling” by Chung et al. (https://arxiv.org/abs/1412.3555).
● Date: April 9th at 11.30 @ICMAT-CSIC
● Facilitator: David Ríos Insua (ICMAT-CSIC)
● Discussion Topics: Introduction to sequence modelling, overview of neural networks, and the role of recurrent neural networks (RNNs) in handling sequential data.
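To accompany this first session, here is a minimal sketch of a vanilla RNN cell in NumPy; the dimensions, random weights, and input sequence are toy assumptions for illustration, not course material. It shows the core idea behind RNNs for sequential data: a hidden state updated recurrently, with the same weights reused at every time step.

```python
# Minimal vanilla RNN cell in NumPy (illustrative sketch with toy dimensions).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, seq_len = 4, 8, 5                        # toy sizes (assumptions)

W_xh = rng.normal(scale=0.1, size=(d_hidden, d_in))      # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(d_hidden, d_hidden))  # hidden-to-hidden weights
b = np.zeros(d_hidden)

x = rng.normal(size=(seq_len, d_in))                     # a random input sequence
h = np.zeros(d_hidden)                                   # initial hidden state

for t in range(seq_len):
    # The same weights are applied at every step; h summarises the sequence so far.
    h = np.tanh(W_xh @ x[t] + W_hh @ h + b)

print("final hidden state:", h)
```

The gated architectures (LSTMs and GRUs) compared empirically by Chung et al. replace this plain tanh update with gated updates that make long-range dependencies easier to learn.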
Session 2: Transformers
● Purpose: Transformers provide the basis for LLMs. Understand their key ideas.
● Reading suggestions: “Formal Algorithms for Transformers” by Phuong et al. (https://arxiv.org/abs/2207.09238), a self-contained and mathematically precise overview of the transformer architecture. “Attention Is All You Need” by Vaswani et al. (https://arxiv.org/abs/1706.03762), the pioneering paper in the field.
● Date: April 16th at 11.30 @CUNEF
● Facilitator: Roi Naveiro (CUNEF University)
● Discussion Topics: A deep dive into the algorithms that drive transformer models, including the attention mechanism and positional encoding; implementing or exploring a basic transformer model for a text classification task, focusing on self-attention (see the sketch below).
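The sketch referenced above: single-head scaled dot-product self-attention in NumPy, following the formulation of Vaswani et al. All dimensions and the random inputs are illustrative assumptions.

```python
# Single-head scaled dot-product self-attention in NumPy (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model = 6, 16                                  # toy sizes (assumptions)

X = rng.normal(size=(seq_len, d_model))                   # one embedding per token
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v                       # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)                       # scaled dot products
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)            # row-wise softmax

out = weights @ V                                         # attention-weighted mix of values
print(out.shape)  # (6, 16): one context-aware vector per token
```

A full transformer block stacks several such heads and combines them with positional encodings, residual connections, layer normalisation, and feed-forward layers.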
Session 3: Foundation Models: BERT and GPT
● Purpose: Get acquainted with the key concepts underlying BERT and GPT, the principal foundation models behind LLMs.
● Reading suggestions: “Language Models are Few-Shot Learners” by Brown et al. (https://arxiv.org/abs/2005.14165)
● Date: April 23rd at 11.30 @ICMAT
● Facilitator: Carlos García Meixide (ICMAT-CSIC)
● Discussion Topics: Core architectures in modern LLMs and the few-shot learning paradigm (a prompting sketch follows).
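To make the few-shot idea of Brown et al. concrete, the following sketch builds an in-context prompt: the task is specified entirely through demonstrations in the prompt, with no gradient updates. The reviews and labels are made up for illustration.

```python
# Few-shot (in-context) prompt construction, in the spirit of Brown et al.
# The demonstrations and the query are invented examples.
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]
query = "A thoughtful, beautifully acted film."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:                  # in-context demonstrations
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"      # the model completes this line

print(prompt)
```

The resulting string would be passed to any autoregressive LLM; the tokens it generates after “Sentiment:” serve as the predicted label.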
Session 4: Fine-Tuning with RLHF and RLAIF
● Purpose: Get acquainted with Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF) as methods to tune foundation models to specific purposes.
● Reading suggestions: “Fine-Tuning Language Models from Human Preferences” by Ziegler et al. (https://arxiv.org/abs/1909.08593), to understand the RLHF approach to fine-tuning language models.
● Date: May 7th at 11.30 @CUNEF
● Facilitator: Victor Gallego (Komorebi AI)
● Discussion Topics: Principles of RLHF and its application in training models; possibly an introduction to RLAIF (a reward-model sketch follows).
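The reward-model sketch referenced above: the first stage of RLHF fits a reward model to human preference data. A standard choice, shown here with toy hand-set scores, is a Bradley-Terry style pairwise loss (Ziegler et al. use a closely related comparison likelihood); in practice the scores come from a neural reward model.

```python
# Pairwise preference loss for RLHF reward modelling (Bradley-Terry style).
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of one human-labelled preference pair."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks when the reward model ranks the preferred answer higher.
print(preference_loss(2.0, 0.5))   # ~0.20: correct ranking, small loss
print(preference_loss(0.5, 2.0))   # ~1.70: wrong ranking, large loss
```

In the second stage, the fitted reward model scores sampled completions and a policy-gradient method (PPO in Ziegler et al.) fine-tunes the language model to increase that reward.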
Session 5: Research Directions: Bayesian Methods and Security
● Purpose: Promote discussion of potential research topics around two issues: (a) Bayes for LLMs and LLMs for Bayes; (b) the security of LLMs.
● Suggested readings: Explore aspects of Bayesian LLMs, Bayesian transformers, causality and LLMs, and adversarial attacks on LLMs. Some relevant papers:
○ “Transformers Can Do Bayesian Inference” (https://arxiv.org/abs/2112.10510)
○ “On the Dangers of Stochastic Parrots…” (https://dl.acm.org/doi/10.1145/3442188.3445922)
● Date: May 9th at 11.30 @IFT
● Facilitator: David Ríos Insua (ICMAT-CSIC)
● Discussion Topics: The potential of transformers for Bayesian inference, adversarial attacks on LLMs, causality and LLMs, etc. (a toy Bayesian-inference sketch follows).
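As a toy illustration of topic (a), the sketch below computes exactly the quantity that a prior-data fitted transformer, as in “Transformers Can Do Bayesian Inference”, is trained to approximate: the posterior predictive given an in-context dataset. The Beta-Bernoulli model and the data are our own illustrative assumptions.

```python
# Exact posterior predictive for a Beta-Bernoulli model: the target that a
# PFN-style transformer approximates when fed the data as context.
a, b = 1.0, 1.0                           # Beta(1, 1) prior over the coin bias
observed = [1, 1, 0, 1]                   # the "in-context" dataset (invented)

heads = sum(observed)
tails = len(observed) - heads
post_a, post_b = a + heads, b + tails     # conjugate Beta posterior update

p_next_head = post_a / (post_a + post_b)  # posterior predictive for the next flip
print(f"P(next flip is heads | data) = {p_next_head:.3f}")  # 0.667
```

A transformer trained on many datasets sampled from the prior and then fed `observed` as context would output an approximation to this predictive distribution in a single forward pass.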