Natural Language Processing

About This Course

Natural Language Processing (NLP) is one of the main subfields of Artificial Intelligence (AI), concerned with understanding and generating human language (often in text form). Like most other applied areas of AI, NLP has changed rapidly over the past decade, driven by advances in machine (deep) learning. This course provides an overview of some of the most recent developments in NLP, mostly with a deep learning flavor.

Learning Objectives

The course focuses mainly on two areas: semantic representations and pre-trained language models. For the former, we provide a brief introduction to vector space models and word embeddings. For the latter (which constitutes more than 70% of the syllabus), we delve into Transformer-based language models and cover various topics, such as BERT and its derivatives, other architecture types, model analysis, prompting, interpretation, and ethical considerations. This is accompanied by many practical sessions on PyTorch and the Transformers (HuggingFace) library, along with hands-on assignments.
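To give a flavor of the practical sessions, here is a minimal, hypothetical sketch (not taken from the course materials) that loads a pre-trained BERT checkpoint with the HuggingFace Transformers library and runs masked-word prediction:

    from transformers import pipeline

    # Load a standard pre-trained BERT checkpoint from the HuggingFace Hub
    # (the checkpoint name is an assumption; the course may use others).
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # Ask the model to fill in the masked token and print its top predictions.
    for prediction in unmasker("Natural Language Processing is a [MASK] of Artificial Intelligence."):
        print(prediction["token_str"], round(prediction["score"], 3))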

Check the main website for more details: http://teias-courses.github.io/nlp00

Course Weekly Plan