Computational challenges of training LLMs

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 21. 21:12


https://www.coursera.org/learn/generative-ai-with-llms/lecture/gZArr/computational-challenges-of-training-llms

 


One of the major challenges when training large language models (LLMs) is running out of GPU memory. Deep learning frameworks run on Nvidia GPUs through the CUDA libraries, and hitting the memory limit typically surfaces as a CUDA out-of-memory error. LLMs are massive and require substantial memory to store and train their parameters. For instance, a single parameter is typically stored as a 32-bit float, taking up 4 bytes of memory, so storing one billion parameters requires 4 gigabytes of GPU RAM just for the weights. During training, however, additional components such as optimizer states, gradients, activations, and temporary variables push the memory requirement up to roughly 20 times the size of the weights alone. Training a one billion parameter model therefore needs approximately 80 gigabytes of GPU RAM, which is more than consumer hardware offers and challenging even for data center GPUs.
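
The arithmetic behind these figures is simple enough to check directly. The sketch below (plain Python, taking the 4 bytes per parameter and the roughly 20x training multiplier quoted above as assumptions) estimates the storage and training footprints of a one billion parameter model.

```python
# Back-of-the-envelope GPU memory estimate, using the figures quoted above:
# 4 bytes per FP32 parameter, and roughly 20x that for training overheads
# (optimizer states, gradients, activations, temporary variables).

GB = 1e9  # decimal gigabytes, matching the round numbers in the text

def memory_estimate(num_params: int,
                    bytes_per_param: float = 4.0,
                    train_multiplier: float = 20.0) -> tuple[float, float]:
    """Return (weights_gb, training_gb) for a model with num_params parameters."""
    weights_gb = num_params * bytes_per_param / GB
    training_gb = weights_gb * train_multiplier
    return weights_gb, training_gb

weights_gb, training_gb = memory_estimate(1_000_000_000)
print(f"Weights only (FP32): ~{weights_gb:.0f} GB")   # ~4 GB
print(f"Full training      : ~{training_gb:.0f} GB")  # ~80 GB
```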

Quantization is a technique for reducing memory consumption during training by lowering the precision of the model weights from 32-bit floating point to 16-bit floating point or even 8-bit integers. It projects the original 32-bit values into a lower-precision space, saving memory at the cost of some precision. BFLOAT16 (BF16), a hybrid between half-precision FP16 and full-precision FP32, has become popular in deep learning because it keeps the full dynamic range of FP32 while halving memory requirements. Many LLMs, including FLAN-T5, are pre-trained with BFLOAT16.
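
To make the BF16 trade-off concrete, here is a minimal PyTorch sketch (illustrative values only, not code from the course): casting to FP16 or BF16 halves the bytes per element, but FP16 loses dynamic range and overflows on large values, while BF16 keeps the FP32 range and gives up mantissa precision instead.

```python
import torch

# 1/3 exposes the loss of mantissa precision; 1e20 exposes the dynamic range
# (FP16 tops out near 65,504 and overflows to inf, BF16 does not).
x = torch.tensor([1.0 / 3.0, 1e20])  # created in the default FP32

for dtype in (torch.float32, torch.float16, torch.bfloat16):
    y = x.to(dtype)
    print(f"{str(dtype):15s} bytes/elem={y.element_size()}  "
          f"1/3 -> {y[0].item():.8f}  1e20 -> {y[1].item():.3e}")

# Expected behaviour:
#   float32 : 4 bytes/elem, 1/3 ~ 0.33333334, 1e20 stays 1e20
#   float16 : 2 bytes/elem, 1/3 ~ 0.33325195, 1e20 overflows to inf
#   bfloat16: 2 bytes/elem, 1/3 ~ 0.33398438, 1e20 stays finite (range preserved)
```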

Quantization significantly reduces memory requirements: for a one billion parameter model, the weights shrink to about 2 gigabytes at 16-bit precision or 1 gigabyte at 8-bit precision. This matters because as models grow (some exceed 50 billion parameters), they no longer fit on a single GPU at all, forcing distributed training across multiple GPUs, which quickly becomes expensive.
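
Extending the same arithmetic to lower precisions and larger models shows why this matters; the numbers below are weights-only footprints (training overhead comes on top), and the 50-billion-parameter row illustrates how quickly a single GPU stops being enough.

```python
# Weights-only memory footprint at different precisions (decimal GB).
GB = 1e9
bytes_per_param = {"FP32": 4, "FP16/BF16": 2, "INT8": 1}

for num_params in (1_000_000_000, 50_000_000_000):
    for name, nbytes in bytes_per_param.items():
        print(f"{num_params / 1e9:>3.0f}B params @ {name:9s}: "
              f"{num_params * nbytes / GB:>6.0f} GB")

# 1B params : 4 GB (FP32), 2 GB (FP16/BF16), 1 GB (INT8)
# 50B params: 200 GB (FP32), 100 GB (FP16/BF16), 50 GB (INT8),
#             already at or beyond the memory of a single GPU
```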

 

