Generative configuration

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 21. 19:52

https://www.coursera.org/learn/generative-ai-with-llms/lecture/18SPI/generative-configuration

In this video, you'll learn about methods and configuration parameters that influence how a language model decides which word to generate next. These parameters are set at inference time and affect factors like completion length and creativity.

Max New Tokens: This parameter caps the number of tokens the model generates. It is an upper limit rather than a fixed count: if set to 100, the model can still produce fewer tokens when another stopping condition is reached first, such as predicting an end-of-sequence token.
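
As a concrete illustration, here is a minimal sketch using the Hugging Face transformers library (the library choice and the "gpt2" checkpoint are my own illustrative assumptions; the lecture does not prescribe a specific API):

```python
# Minimal sketch, assuming the Hugging Face transformers library is installed;
# "gpt2" is an arbitrary small checkpoint chosen only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")
# max_new_tokens is an upper limit: generation stops earlier if the model
# emits its end-of-sequence token before 100 new tokens are produced.
output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```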

Greedy Decoding: By default, most large language models use greedy decoding, where they select the word with the highest probability as the next word. While simple, it may result in repetitive or less natural text.
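
To make the selection rule concrete, here is a toy NumPy sketch (the logits values are made up for illustration):

```python
import numpy as np

def greedy_next_token(logits: np.ndarray) -> int:
    # Greedy decoding: always pick the single highest-scoring token.
    # Softmax is monotonic, so argmax over logits equals argmax over probabilities.
    return int(np.argmax(logits))

logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up scores over a 4-token vocabulary
print(greedy_next_token(logits))  # always 0, the highest-scoring token
```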

Random Sampling: This method introduces variability by having the model randomly choose words based on their probabilities. It reduces word repetition but can lead to nonsensical text.
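
A corresponding sketch of random sampling, under the same toy setup:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, rng: np.random.Generator) -> int:
    # Random sampling: turn logits into probabilities with a softmax,
    # then draw the next token in proportion to those probabilities.
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])
print([sample_next_token(logits, rng) for _ in range(5)])  # varied, probability-weighted draws
```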

Top-k Sampling: You specify a k value, and the model chooses from the top k tokens with the highest probabilities. This maintains some randomness while ensuring more sensible choices.
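
A toy sketch of the same idea:

```python
import numpy as np

def top_k_next_token(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
    # Top-k sampling: keep only the k highest-scoring tokens,
    # renormalize their probabilities, and sample from that reduced set.
    top = np.argsort(logits)[-k:]                    # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(top[rng.choice(k, p=probs)])

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(top_k_next_token(logits, k=3, rng=rng))  # the lowest-scoring token can never be picked
```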

Top-p Sampling: With a p value, the model restricts sampling to the smallest set of most probable tokens whose cumulative probability reaches p, then samples from that set. It balances randomness and coherence.
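
And a toy sketch of top-p (nucleus) sampling:

```python
import numpy as np

def top_p_next_token(logits: np.ndarray, p: float, rng: np.random.Generator) -> int:
    # Top-p sampling: keep the smallest set of most probable tokens whose
    # cumulative probability reaches p, renormalize, and sample from it.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                # tokens from most to least probable
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(keep[rng.choice(len(keep), p=kept_probs)])

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(top_p_next_token(logits, p=0.9, rng=rng))  # the low-probability tail is excluded
```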

Temperature: This parameter changes the shape of the probability distribution over the next token by scaling the logits before the softmax. Higher values flatten the distribution and increase randomness, while lower values sharpen it and make the model more deterministic.
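
The effect is easy to see by applying a temperature-scaled softmax to toy logits:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    # Temperature divides the logits before the softmax: T > 1 flattens the
    # distribution (more randomness), T < 1 sharpens it (closer to greedy).
    scaled = logits / temperature
    scaled -= scaled.max()                         # for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
print(softmax_with_temperature(logits, 0.5))   # peaked: mass concentrates on token 0
print(softmax_with_temperature(logits, 2.0))   # flatter: probabilities move closer together
```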

These parameters allow you to control the trade-off between randomness and coherence in the model's output. Adjusting them can help you generate text that suits your specific needs.
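
For reference, all of these knobs correspond to inference-time arguments in common generation APIs; here is a hedged sketch with Hugging Face transformers (the specific values are arbitrary illustrations, not recommendations from the lecture):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The weather today is", return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=100,   # upper bound on generated tokens
    do_sample=True,       # switch from greedy decoding to sampling
    top_k=50,             # restrict sampling to the 50 most probable tokens
    top_p=0.9,            # ...and to the nucleus covering 90% of the probability mass
    temperature=0.7,      # sharpen the distribution slightly
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```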

