Multi-task instruction fine-tuning

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 22. 19:26

https://www.coursera.org/learn/generative-ai-with-llms/lecture/notob/multi-task-instruction-fine-tuning

 

Multitask fine-tuning is an extension of single-task fine-tuning in which the training dataset consists of input-output examples for multiple tasks. The dataset mixes tasks such as summarization, review rating, code translation, and entity recognition, with each example instructing the model to perform a particular task. Training on this mixed dataset improves the model's performance on all of the tasks simultaneously and avoids the catastrophic forgetting that can occur with single-task fine-tuning. Over many epochs of training, the model's weights are updated based on the losses calculated across these examples, yielding an instruction-tuned model that is proficient at many different tasks. Multitask fine-tuning does require a substantial amount of data, possibly 50,000 to 100,000 examples, but it can produce highly capable models suited to scenarios that demand strong performance across a variety of tasks.
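
To make this concrete, here is a minimal Python sketch of what a mixed-task instruction dataset could look like before fine-tuning. The prompt templates, the three example tasks, and the use of the google/flan-t5-base tokenizer are illustrative assumptions, not the exact FLAN setup.

# A sketch of building a mixed, multi-task instruction dataset.
# Templates and task mix are illustrative, not the actual FLAN templates.
import random
from transformers import AutoTokenizer

# Each example pairs an instruction-style input with the expected output text.
examples = [
    {"task": "summarization",
     "input": "Summarize the following conversation.\n\n"
              "#Person1#: The package still hasn't arrived.\n"
              "#Person2#: I'll check the tracking number for you.",
     "output": "#Person1# reports a missing package and #Person2# offers to check the tracking number."},
    {"task": "review_rating",
     "input": "Rate this review as positive or negative.\n\nReview: The battery dies within an hour.",
     "output": "negative"},
    {"task": "entity_recognition",
     "input": "List the person names mentioned in the text.\n\nText: Alice met Bob at the Seattle office.",
     "output": "Alice, Bob"},
]

# Shuffle so each training batch mixes tasks rather than grouping them;
# training on the mixture is what improves all tasks at once.
random.shuffle(examples)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Tokenize inputs and targets. The target token ids become the labels that the
# cross-entropy loss is computed against during fine-tuning (in a real training
# loop, pad token ids in the labels are usually replaced with -100 so the loss
# ignores them).
model_inputs = tokenizer([ex["input"] for ex in examples],
                         padding=True, truncation=True, return_tensors="pt")
labels = tokenizer([ex["output"] for ex in examples],
                   padding=True, truncation=True, return_tensors="pt").input_ids

From here, the tokenized inputs and labels would feed a standard sequence-to-sequence training loop that updates the model's weights from the combined loss across all of the tasks.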

A well-known example of models trained with multitask instruction fine-tuning is the FLAN family, which includes FLAN-T5 and FLAN-PaLM. These models have been fine-tuned on a diverse set of datasets and tasks, which makes them versatile: FLAN-T5, for instance, was fine-tuned on 473 datasets across 146 task categories.
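
For reference, an instruction-tuned checkpoint such as FLAN-T5 can be prompted zero-shot with plain natural-language instructions. The sketch below uses the public google/flan-t5-base checkpoint through the Hugging Face transformers library; the checkpoint size and the prompt wording are assumptions for illustration, not part of the lecture.

# Sketch: prompting an instruction-tuned FLAN-T5 checkpoint zero-shot.
# "google/flan-t5-base" is an assumed public checkpoint; any FLAN-T5 size works.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = ("Summarize the following conversation.\n\n"
          "#Person1#: My order arrived damaged.\n"
          "#Person2#: I'm sorry to hear that. We can send a replacement today.")

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))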

In some cases, even a versatile model like FLAN-T5 may require further fine-tuning to excel in specific use cases. For instance, if you are a data scientist developing an app to support customer service tasks, you might need to fine-tune FLAN-T5 on a dialogue dataset more closely aligned with customer service conversations to improve its summarization abilities. This is what you will explore in the lab this week using the "dialogsum" dataset, which contains over 13,000 support chat dialogues and summaries. Fine-tuning with domain-specific data helps the model adapt to your company's specific summarization requirements.
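
As a rough sketch of what that preparation could look like, the snippet below loads the dataset and wraps each dialogue in an instruction-style summarization prompt. It assumes the knkarthick/dialogsum copy of the dataset on the Hugging Face Hub and a made-up prompt template, which may differ from the exact setup used in the lab.

# Sketch: preparing DialogSum for summarization fine-tuning of FLAN-T5.
# "knkarthick/dialogsum" is an assumed Hub copy; the lab may use another source.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("knkarthick/dialogsum")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

def to_features(batch):
    # Wrap each dialogue in an instruction prompt and tokenize the
    # dialogue/summary pairs into model inputs and labels.
    prompts = ["Summarize the following conversation.\n\n" + d + "\n\nSummary:"
               for d in batch["dialogue"]]
    features = tokenizer(prompts, truncation=True, max_length=512)
    features["labels"] = tokenizer(batch["summary"], truncation=True,
                                   max_length=128)["input_ids"]
    return features

tokenized_train = dataset["train"].map(to_features, batched=True,
                                       remove_columns=dataset["train"].column_names)
print(tokenized_train)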

When fine-tuning, it's essential to evaluate the quality of the model's completions. In the next video, you'll learn about various metrics and benchmarks to assess your model's performance and measure the improvement achieved through fine-tuning.
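
As a small preview, ROUGE is one commonly used metric for summarization quality. The snippet below scores a made-up prediction against a made-up reference using the Hugging Face evaluate library, just to show the mechanics; the metrics themselves are covered in the next video.

# Sketch: scoring a generated summary against a reference with ROUGE.
# The prediction/reference strings here are invented examples.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["#Person1# reports a damaged order and #Person2# offers a replacement."]
references = ["#Person1#'s order arrived damaged, so #Person2# will send a replacement."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum values between 0 and 1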

 
