Parameter efficient fine-tuning (PEFT)

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 23. 00:21

https://www.coursera.org/learn/generative-ai-with-llms/lecture/rCE9r/parameter-efficient-fine-tuning-peft

 


Training large language models (LLMs) is computationally intensive. Full fine-tuning in particular requires significant memory not only for the model weights but also for optimizer states, gradients, activations, and more, which can quickly exceed the capabilities of consumer hardware. Parameter-efficient fine-tuning (PEFT) is an alternative in which only a small subset of parameters is updated. Most of the LLM's weights are kept frozen, so the number of trained parameters is much smaller than in the original model (in some cases just 15-20% of the original LLM's weights), and memory requirements drop accordingly. PEFT can often be performed on a single GPU and is less prone to the catastrophic forgetting that full fine-tuning can suffer from.
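
To make the idea concrete, the sketch below freezes a pretrained model's weights and leaves only a small subset trainable. It assumes a Hugging Face transformers model; the model name and the choice of which parameters stay trainable are illustrative, not taken from the lecture.

# Minimal sketch of the core PEFT idea: freeze the pretrained LLM and
# train only a small, task-specific subset of parameters.
# (The model name and the unfrozen subset are illustrative assumptions.)
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Freeze every weight of the pretrained model.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only a small component (here, the language-model head), so that
# gradients and optimizer states exist only for this tiny fraction of weights.
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")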

Because only a small number of weights are trained, the new parameters for each task have a small footprint and can be swapped in on top of the frozen base model, so one LLM can be adapted efficiently to multiple tasks. PEFT methods fall into three broad categories: selective, reparameterization, and additive. Selective methods fine-tune only a subset of the original LLM's parameters, but their performance is mixed and they come with trade-offs between parameter efficiency and compute efficiency. Reparameterization methods, such as LoRA, reduce the number of trainable parameters by creating low-rank transformations of the original network weights. Additive methods keep the original weights frozen and introduce new trainable components, such as adapter layers or soft prompts.
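
As a concrete example of the reparameterization direction, the sketch below wraps a base model with LoRA using the Hugging Face peft library. The base model, the rank r, and the target attention projections are illustrative assumptions rather than values from the lecture.

# Sketch of a reparameterization method (LoRA) using the Hugging Face peft library.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices (assumed value)
    lora_alpha=32,              # scaling applied to the low-rank update
    target_modules=["q", "v"],  # attention projections to adapt (T5 module names)
    lora_dropout=0.05,
)

# Only the small LoRA matrices are trainable; the original weights stay frozen.
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()

Because the frozen base model is unchanged, a separate small set of LoRA weights can be trained for each task and swapped in at inference time.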

Upcoming lessons explore prompt tuning, a specific soft prompt technique, and take a closer look at how LoRA reduces the memory required for fine-tuning.
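
For the additive, soft prompt direction, a similar sketch using peft's prompt tuning configuration might look like the following; the base model and the number of virtual tokens are assumptions for illustration.

# Sketch of an additive method (prompt tuning / soft prompts) using the peft library.
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,  # randomly initialize the soft prompt
    num_virtual_tokens=20,  # trainable prompt vectors prepended to every input (assumed count)
)

# Only the virtual token embeddings are trained; all original model weights stay frozen.
peft_model = get_peft_model(base_model, prompt_config)
peft_model.print_trainable_parameters()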

 
