Introduction - Week 2

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 22. 17:46

https://www.coursera.org/learn/generative-ai-with-llms/lecture/hwMrP/introduction-week-2

In this video, Mike and Shelbee introduce the concept of instruction fine-tuning for large language models (LLMs). Instruction fine-tuning is a crucial step in making LLMs more effective at following specific instructions or prompts. They discuss how LLMs, even after initial pretraining, may not reliably understand and respond to prompts, and how instruction fine-tuning modifies the model's behavior to make it more helpful for specific tasks.
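
To make this concrete, here is a minimal sketch of what an instruction fine-tuning example might look like. The template wording, field names, and sample record are my own illustrative assumptions, not the course's exact format; the key point is that each training example pairs an explicit instruction prompt with the desired completion.

```python
# Minimal sketch of preparing instruction fine-tuning data (illustrative
# template and field names, not the course's exact format). Each raw
# example is rewritten so the model sees an explicit instruction and
# learns to produce the desired response.

raw_examples = [
    {
        "instruction": "Summarize the following review.",
        "input": "The battery lasts two days and the screen is sharp.",
        "output": "A positive review praising battery life and display.",
    },
]

TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def to_training_pair(example: dict) -> tuple[str, str]:
    """Return (prompt, completion). In supervised fine-tuning the loss is
    typically computed only on the completion tokens."""
    return TEMPLATE.format(**example), example["output"]

for ex in raw_examples:
    prompt, completion = to_training_pair(ex)
    print(prompt + completion)
```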

They emphasize the significance of instruction fine-tuning: it allows LLMs, initially pretrained on general internet text, to learn to follow instructions, a skill distinct from simply predicting the next word. They also touch on the challenge of catastrophic forgetting, where fine-tuning on a narrow task can erase information the model learned during pretraining.

Additionally, they introduce parameter-efficient fine-tuning (PEFT), which aims to make the fine-tuning process more efficient in terms of computation and memory. PEFT techniques such as LoRA (Low-Rank Adaptation) enable fine-tuning with a much smaller memory footprint, making it feasible for a wider range of applications.
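
The video names LoRA but does not show code, so the following is a minimal sketch of the low-rank idea in PyTorch (my own illustration, with assumed rank r=8 and scaling alpha=16): the pretrained weight matrix is frozen, and only two small matrices A and B are trained, so the effective weight becomes W + (alpha/r)·BA with far fewer trainable parameters.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A LoRA-adapted linear layer: the pretrained weight W stays frozen,
    and only the low-rank factors A (r x d_in) and B (d_out x r) are
    trained, so the effective weight W + (alpha / r) * B @ A needs far
    fewer trainable parameters than W itself."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank update. B starts at zero, so training
        # begins from the unmodified pretrained behavior.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # roughly 3% of the layer
```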

Finally, they highlight that developers often choose between using a giant, fully fine-tuned model and fine-tuning a smaller model, based on their specific requirements and cost constraints.

 
