Generative AI with Large Language Models

Generative AI project lifecycle

Taeyoon.Kim.DS 2023. 8. 21. 20:26

https://www.coursera.org/learn/generative-ai-with-llms/lecture/21Nwn/generative-ai-project-lifecycle

 

Generative AI project lifecycle - Week 1 | Coursera: Video created by deeplearning.ai and Amazon Web Services for the course "Generative AI with Large Language Models". Covers generative AI use cases, the project lifecycle, and model pre-training.

In this video, you'll learn about the generative AI project lifecycle, a framework that guides you through the development and deployment of an LLM-powered application, from concept to launch. The lifecycle involves several stages:

Scope Definition: Define the specific function of the LLM in your application. Decide if it needs to perform various tasks or focus on a single, specific task.

Model Selection: Decide whether to use an existing base model or train your own. In most cases, you'll start with an existing model.

Performance Assessment: Assess your model's performance on your task. If it falls short, try in-context learning (providing examples in the prompt) first, and consider additional training, such as fine-tuning, for further improvement.

Adapt and Align: Ensure that your model behaves well and aligns with human preferences. Explore techniques like reinforcement learning from human feedback (RLHF).

Evaluation: Use metrics and benchmarks to evaluate how well your model is performing and if it aligns with your preferences.

Optimize for Deployment: Optimize your model for deployment to make efficient use of compute resources and provide a better user experience.

Infrastructure: Consider any additional infrastructure requirements for your application.
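The in-context learning mentioned in the Performance Assessment stage can be illustrated with a short sketch: before resorting to fine-tuning, you assemble a few-shot prompt that shows the model worked examples of the task. The helper name, task, and example data below are hypothetical, chosen only to demonstrate the prompt structure:

```python
def build_few_shot_prompt(task_instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the new query."""
    parts = [task_instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output blank so the model completes it
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical sentiment-classification examples (illustrative only)
examples = [
    ("The service was wonderful.", "positive"),
    ("I waited an hour and left.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    examples,
    "Great food, will come back!",
)
print(prompt)
```

The resulting string would be sent to the base model as-is; if a handful of examples like this is enough to get good results, the more expensive fine-tuning step may not be needed.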

Throughout the course, you'll delve into each stage of the lifecycle in detail. Understanding these stages is essential to developing and deploying effective LLM-powered applications.

 
