Fine-tuning on a single task

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 22. 18:20

https://www.coursera.org/learn/generative-ai-with-llms/lecture/cTZRI/fine-tuning-on-a-single-task

 


While large language models (LLMs) are known for their versatility in handling various language tasks, your application might require expertise in only one specific task. In such cases, you can fine-tune a pre-trained LLM to excel at that particular task. For instance, you might fine-tune for text summarization using a dataset tailored to that task. Surprisingly, even a relatively small number of examples, usually 500-1,000, can significantly enhance performance, despite the model having processed billions of pieces of text during pre-training.
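To make the idea concrete, here is a minimal sketch (not the lecture's code) of single-task fine-tuning on a toy linear "model": the pre-trained weights are fully updated on a small task-specific dataset, echoing the 500-1,000-example figure above. The model, dataset, and sizes are all invented for illustration.

```python
import numpy as np

# Toy stand-in for a pre-trained model: a linear map y = W x.
# A real LLM would be fine-tuned with a training library instead;
# this only illustrates that all weights move during fine-tuning.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))        # "pre-trained" weights

X = rng.normal(size=(500, d))      # small task-specific dataset
W_task = rng.normal(size=(d, d))   # hidden mapping defining the task
Y = X @ W_task.T

lr = 0.1
for _ in range(300):               # full fine-tuning: every weight is updated
    grad = (X @ W.T - Y).T @ X / len(X)
    W -= lr * grad

task_loss = float(np.mean((X @ W.T - Y) ** 2))   # near zero after training
```

Even this tiny example shows the pattern the lecture describes: a few hundred examples suffice to pull the weights onto the new task.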

However, fine-tuning for a single task can pose a potential challenge known as "catastrophic forgetting." This occurs because the fine-tuning process modifies the original LLM's weights. While this leads to superior performance on the fine-tuned task, it can result in a decline in performance on other tasks. For example, a model that originally excelled in named entity recognition might lose this ability after fine-tuning, causing confusion between entities and showing behavior associated with the new task.
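The same toy setup (an illustrative sketch, nowhere near LLM scale) can mimic catastrophic forgetting: pre-train on task A, then fully fine-tune on task B, and performance on A collapses because every weight was overwritten.

```python
import numpy as np

# Toy illustration of catastrophic forgetting: sequential full
# fine-tuning on task B erases what the model learned for task A.
d = 8

def make_task(seed, n=400):
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, d))
    return X, X @ r.normal(size=(d, d)).T

def train(W, X, Y, lr=0.1, steps=300):
    for _ in range(steps):
        W = W - lr * (X @ W.T - Y).T @ X / len(X)
    return W

def loss(W, X, Y):
    return float(np.mean((X @ W.T - Y) ** 2))

Xa, Ya = make_task(1)   # task A (think: named entity recognition)
Xb, Yb = make_task(2)   # task B (think: summarization)

W = train(np.zeros((d, d)), Xa, Ya)   # "pre-training" on task A
loss_a_before = loss(W, Xa, Ya)       # near zero: good at A

W = train(W, Xb, Yb)                  # single-task fine-tuning on B
loss_a_after = loss(W, Xa, Ya)        # large: ability on A is lost
```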

To mitigate catastrophic forgetting, you have several options. First, assess whether this phenomenon truly impacts your use case. If your primary concern is reliable performance on the specific fine-tuned task, the model's inability to generalize to other tasks may not be a problem. Alternatively, you can perform multitask fine-tuning, which involves fine-tuning on multiple tasks simultaneously. This approach typically requires a substantial amount of data and computing power, around 50,000-100,000 examples across various tasks.
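A crude analogue of multitask fine-tuning (again a toy sketch, with a feature-slot trick standing in for instruction prompts that name each task) shows why mixing tasks into one training set avoids forgetting: the model fits both tasks at once instead of overwriting one with the other.

```python
import numpy as np

# Toy analogue of multitask fine-tuning: examples from both tasks
# are mixed into one dataset, so neither task is forgotten.
d, n = 8, 500

def make_task(seed):
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, d))
    return X, X @ r.normal(size=(d, d)).T

Xa, Ya = make_task(1)
Xb, Yb = make_task(2)

# Route each task's inputs into separate feature slots; this is a
# crude stand-in for instructions that tell the model which task
# an example belongs to.
Fa = np.hstack([Xa, np.zeros_like(Xa)])
Fb = np.hstack([np.zeros_like(Xb), Xb])
F = np.vstack([Fa, Fb])
Y = np.vstack([Ya, Yb])

W, *_ = np.linalg.lstsq(F, Y, rcond=None)   # one joint fit on mixed data

loss_a = float(np.mean((Fa @ W - Ya) ** 2))  # both stay near zero
loss_b = float(np.mean((Fb @ W - Yb) ** 2))
```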

The second option is parameter-efficient fine-tuning (PEFT), a technique that maintains the majority of the original LLM's weights while training only a limited number of task-specific adapter layers and parameters. PEFT is less susceptible to catastrophic forgetting and will be discussed in more detail later in this course.
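The parameter counts below sketch the PEFT idea in the style of LoRA adapters: the base weight matrix stays frozen and only a low-rank update is trained. The layer sizes and rank are illustrative assumptions, not figures from the lecture.

```python
import numpy as np

# LoRA-style PEFT sketch: freeze the base weights W and train only
# a low-rank update B @ A. Sizes here are illustrative assumptions.
d, r = 4096, 8                    # hidden size, adapter rank
base_params = d * d               # frozen pre-trained weights
adapter_params = d * r + r * d    # trainable: B (d x r) and A (r x d)
fraction = adapter_params / (base_params + adapter_params)
# fraction is under 0.5%: only a sliver of the layer is trained.

# The effective layer weight at inference is W + B @ A
# (small matrices here just to show the shapes):
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))         # frozen
A = rng.normal(size=(2, 16)) * 0.01   # trainable
B = np.zeros((16, 2))                 # B starts at zero, so W is unchanged at first
W_eff = W + B @ A
```

Because the frozen base weights are untouched, the original capabilities that full fine-tuning would overwrite remain available, which is why PEFT is less prone to catastrophic forgetting.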

 
