Syllabus
CS 555
Large Language Models: Theoretical Foundations and Practical Applications
Faculty: Faculty of Engineering and Natural Sciences
Semester: Spring 2025-2026
Course: CS 555 - Large Language Models: Theoretical Foundations and Practical Applications
Time/Place:

| Time | Week Day | Place | Date |
|---|---|---|---|
| 09:40-10:30 | Wed | UC-G030 | Feb 16-May 22, 2026 |
| 10:40-12:30 | Thu | FASS-G062 | Feb 16-May 22, 2026 |
Level of Course: Masters
Course Credits: SU Credit: 3, ECTS: 10
Prerequisites: CS 512
Corequisites: -
Course Type: Lecture
Instructor(s) Information
İnanç Arın
- Email: inancarin@sabanciuniv.edu
Course Information
Catalog Course Description
This course is designed to address the growing demand for expertise in Large Language Models (LLMs), which have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI). As LLMs continue to transform industries such as healthcare, finance, and customer service, there is an increasing need for professionals who can understand, deploy, and fine-tune these models. The course provides students with both theoretical foundations and hands-on experience in working with transformer architectures, semantic search, retrieval-augmented generation (RAG), and AI agents, preparing them for real-world applications of LLMs.

In addition to theory, this course delves into practical techniques for deploying LLMs in real-world environments, including methods for local deployment, fine-tuning, and integrating knowledge graphs to improve performance. By the end of the course, students will be equipped to design, implement, and evaluate complex LLM-based systems such as personalized chatbots and AI agents working in collaborative environments. This course will also prepare students for cutting-edge research and practical challenges in NLP and AI-driven systems, making it a crucial stepping stone for future innovations in the field.
Course Learning Outcomes:

1. Analyze the theoretical foundations of LLMs (transformers, attention, embeddings) to explain their strengths, limitations, and common failure modes in NLP tasks.
2. Fine-tune and deploy pre-trained models for domain-specific tasks, applying efficiency techniques (e.g., quantization, parameter-efficient fine-tuning) to meet privacy, cost, and latency constraints in local environments.
3. Design and implement retrieval-enhanced NLP systems by combining semantic search, RAG, and knowledge graphs to improve factual grounding and support structured reasoning.
4. Develop autonomous AI agents capable of multi-step reasoning, planning, and tool use/task delegation using modern agentic frameworks.
5. Construct, evaluate, and iterate end-to-end LLM applications using appropriate metrics and industry-standard benchmarks, demonstrating effective integration of external data sources to solve real-world problems.