PEFT techniques 2: Soft prompts

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 23. 00:39

https://www.coursera.org/learn/generative-ai-with-llms/lecture/8dnaU/peft-techniques-2-soft-prompts

 


The second parameter-efficient fine-tuning (PEFT) method explored in this lesson is called "prompt tuning." Unlike prompt engineering, where you manually craft prompts to get the desired completions, prompt tuning adds trainable tokens to prompts and lets the model determine their optimal values through supervised learning. These trainable tokens, called soft prompts, are prepended to input text embeddings, and their length is typically between 20 and 100 virtual tokens. Soft prompts are continuous and can take any value within the embedding space.
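The prepending step can be sketched in a few lines. This is a minimal NumPy illustration, not the actual model code: the dimensions, the random stand-in embeddings, and the variable names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim = 16     # illustrative; real models use e.g. 768-4096
num_virtual = 20   # soft prompt length, typically 20 to 100 tokens
seq_len = 8        # length of the tokenized input text

# Frozen input embeddings for a tokenized prompt (stand-in values).
input_embeddings = rng.normal(size=(seq_len, embed_dim))

# Trainable soft prompt: continuous vectors free to take any value
# in the embedding space -- not tied to any vocabulary entry.
soft_prompt = rng.normal(size=(num_virtual, embed_dim))

# Prepend the virtual tokens; this combined matrix is what the
# frozen model consumes. Only `soft_prompt` receives gradients.
model_input = np.concatenate([soft_prompt, input_embeddings], axis=0)

print(model_input.shape)  # (28, 16): 20 virtual + 8 real tokens
```

During training, gradients flow back through the frozen model into `soft_prompt` alone, nudging those vectors toward values that steer the model to the desired completions.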

In prompt tuning, the large language model's weights remain frozen, and only the soft prompts are updated during training. This approach is highly parameter-efficient, requiring training of only a few tokens, as opposed to millions or billions in full fine-tuning. You can train different sets of soft prompts for various tasks and easily swap them out for inference, making it efficient and flexible.
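Some rough arithmetic shows how small the trainable footprint is. The hidden size and model size below are hypothetical round numbers chosen only to make the ratio concrete:

```python
# Rough parameter-count comparison (illustrative sizes).
embed_dim = 4096     # hidden size of a hypothetical large model
num_virtual = 100    # upper end of the typical soft prompt length

soft_prompt_params = num_virtual * embed_dim  # parameters trained in prompt tuning
full_finetune_params = 10_000_000_000         # all weights of a ~10B-parameter model

fraction = soft_prompt_params / full_finetune_params
print(soft_prompt_params)                     # 409600
print(f"trainable fraction: {fraction:.8f}")  # ~0.00004 of the model
```

Because each task's soft prompt is just a small matrix like this, storing one per task and swapping them at inference time is cheap.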

Prompt tuning's performance was compared to full fine-tuning and prompt engineering across model sizes in the study that introduced the method. For smaller language models, prompt tuning is not as effective as full fine-tuning, but its performance improves as model size increases. For models with around 10 billion parameters, prompt tuning can match full fine-tuning while offering significant gains over prompt engineering alone.

One consideration is the interpretability of the learned virtual tokens, since they don't correspond to any known words. However, analysis shows that the nearest vocabulary neighbors of each soft prompt token form tight semantic clusters, indicating that the tokens capture task-related meanings.
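The nearest-neighbor analysis can be mimicked with a toy example. Everything here is fabricated for illustration: a tiny stand-in vocabulary table and a "learned" virtual token deliberately placed near one entry, so the cosine-similarity lookup shows how neighbors hint at a token's meaning.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 8

# Toy vocabulary embeddings (stand-ins for the model's embedding table).
vocab = ["good", "great", "terrible", "movie", "plot"]
vocab_emb = rng.normal(size=(len(vocab), embed_dim))

# A learned virtual token that happens to sit near "good",
# mimicking the semantic clustering observed in practice.
virtual_token = vocab_emb[0] + 0.01 * rng.normal(size=embed_dim)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank vocabulary entries by similarity to the virtual token.
sims = [cosine(virtual_token, v) for v in vocab_emb]
nearest = vocab[int(np.argmax(sims))]
print(nearest)  # "good": the neighbors hint at the token's meaning
```

In the real analysis the same idea is applied to the model's full embedding table: even though a virtual token is not a word, its closest neighbors tend to share a task-relevant meaning.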

In this lesson, you've explored two PEFT methods: LoRA and prompt tuning. Both allow efficient fine-tuning with greatly reduced compute requirements compared to full fine-tuning. LoRA is widely used in practice because it delivers comparable performance at a fraction of the cost.

 
