Pre-training for domain adaptation

Generative AI with Large Language Models

by Taeyoon.Kim.DS 2023. 8. 21. 21:21

https://www.coursera.org/learn/generative-ai-with-llms/lecture/BMxlN/pre-training-for-domain-adaptation

 


In this video, you'll learn about the situations where it might be necessary to pretrain your own large language model (LLM) from scratch. While using existing LLMs can save a lot of time, there are cases where domain-specific vocabulary and language structures make it challenging for these models to perform well. For example, in legal or medical domains, specialized terms and unique language use can hinder model understanding and performance. In such scenarios, pretraining your own LLM tailored to the domain can be essential.

An example of this is BloombergGPT, a finance-specific LLM introduced in 2023. It was pretrained using a combination of finance and general-purpose text data, achieving top-tier results in financial tasks while maintaining competitive performance in general language understanding. The model was trained with 51% financial data and 49% public data.
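
To make the 51/49 mixture concrete, here is a minimal sketch of how such a data mixture could be sampled during pretraining. The corpus names and sizes below are made up for illustration; only the mixture ratio comes from the BloombergGPT description above.

```python
import random

# Hypothetical stand-ins for the two data sources; the real corpora
# (Bloomberg's financial archives plus public text datasets) are far
# larger and are not reproduced here.
financial_docs = [f"financial_doc_{i}" for i in range(1_000)]
general_docs = [f"general_doc_{i}" for i in range(1_000)]

# Target mixture taken from the description above: 51% financial, 49% public.
MIX = {"financial": 0.51, "general": 0.49}

def sample_training_batch(batch_size: int, seed: int = 0) -> list[str]:
    """Draw a batch whose expected composition follows the mixture weights."""
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        source = "financial" if rng.random() < MIX["financial"] else "general"
        docs = financial_docs if source == "financial" else general_docs
        batch.append(rng.choice(docs))
    return batch

batch = sample_training_batch(batch_size=1_000)
share = sum(doc.startswith("financial") for doc in batch) / len(batch)
print(f"Financial share in this batch: {share:.1%}")  # close to 51%
```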

The video also relates BloombergGPT to the scaling laws for model size and training data, showing graphs that compare the model's size and training dataset size against the Chinchilla recommendations. The model size aligns closely with the Chinchilla-optimal point, but the training dataset is smaller than recommended because of the limited availability of financial data. This highlights that real-world constraints may necessitate trade-offs in LLM pretraining.
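
As a rough sanity check of that comparison, the Chinchilla result can be reduced to a rule of thumb of about 20 training tokens per model parameter. The sketch below applies it to approximate BloombergGPT figures (around 50B parameters and around 700B training tokens, as reported in its paper); the numbers are rounded approximations, not values quoted from the lecture.

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# 20 tokens of training data per model parameter.
TOKENS_PER_PARAM = 20

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Recommended number of training tokens for a given model size."""
    return TOKENS_PER_PARAM * n_params

# Approximate, rounded BloombergGPT figures: ~50B parameters, ~700B tokens.
bloomberg_params = 50e9
bloomberg_tokens = 700e9

optimal = chinchilla_optimal_tokens(bloomberg_params)
print(f"Chinchilla-optimal tokens: {optimal / 1e9:,.0f}B")            # ~1,000B
print(f"Actual training tokens:    {bloomberg_tokens / 1e9:,.0f}B")   # ~700B
print(f"Shortfall vs. optimum:     {1 - bloomberg_tokens / optimal:.0%}")
```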

In summary, pretraining your own LLM from scratch can be essential when dealing with specialized domains, as it allows you to create models optimized for specific vocabulary and language structures.

 


 

""" imagine you're a developer building an app to help lawyers and paralegals summarize legal briefs. Legal writing makes use of very specific terms like mens rea in the first example and res judicata in the second. These words are rarely used outside of the legal world, which means that they are unlikely to have appeared widely in the training text of existing LLMs. As a result, the models may have difficulty understanding these terms or using them correctly. Another issue is that legal language sometimes uses everyday words in a different context, like consideration in the third example. Which has nothing to do with being nice, but instead refers to the main element of a contract that makes the agreement enforceable."""
