Introduction

AWS Certified Machine Learning Specialty

by Taeyoon.Kim.DS 2023. 9. 14. 22:40


https://www.udemy.com/course/aws-machine-learning/learn/lecture/16666294#content

 

In this Udemy course on AWS Certified Machine Learning Specialty, the instructor begins by introducing Amazon S3 (Simple Storage Service) and its key concepts. S3 allows you to store objects or files in buckets, and these buckets must have globally unique names. Objects within S3 are essentially files with unique keys representing their full path. The instructor highlights that S3 supports large object sizes, up to five terabytes, and object tags can be added for data classification and security.

For machine learning, S3 serves as the backbone for many AWS ML services, including SageMaker. It's ideal for building a data lake thanks to its virtually unlimited capacity, high durability, and the decoupling of storage from compute. S3 supports any file format, making it suitable for diverse data types such as CSV, JSON, Parquet, and more.

The course also covers data partitioning, a technique for optimizing range queries, especially useful with Amazon Athena. Partitioning involves organizing data within S3 buckets based on criteria like date or product ID to facilitate faster data retrieval. The instructor demonstrates how to create an S3 bucket and set up a basic partitioning structure.
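To illustrate how such partitioned data is queried, here is a minimal sketch (not part of the course material) that runs a range query against a partitioned table using the Athena API via boto3; the database, table, bucket, and partition columns are hypothetical.

```python
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString=(
        "SELECT * FROM my_data_set "
        "WHERE year = '2023' AND month = '09' AND day = '14'"  # only matching partitions are scanned
    ),
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print(response["QueryExecutionId"])
```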


 

* S3 allows you to store objects (files) in "buckets"

* Buckets must have a globally unique name

* Objects (files) have a Key. The key is the FULL path:

  * <my_bucket>/my_file.txt
  * <my_bucket>/my_folder/another_folder/my_file.txt

* This will be interesting for partitioning

* Max object size is 5TB

* Object Tags (key / value pair - up to 10) - useful for security / lifecycle
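The bullets above map directly onto the S3 API. Below is a minimal boto3 sketch (not from the course) that creates a bucket, uploads an object under a full-path key, and attaches object tags; the bucket name, key, and tag values are made up for illustration.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

bucket = "my-globally-unique-ml-bucket"  # bucket names are global across all AWS accounts
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# The key is the FULL path of the object inside the bucket
s3.put_object(
    Bucket=bucket,
    Key="my_folder/another_folder/my_file.txt",
    Body=b"hello from S3",
    Tagging="classification=confidential&team=ml",  # up to 10 key/value tags per object
)
```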

 

### Amazon S3 for ML

* Backbone for many AWS ML services (example: SageMaker)

* Create a "Data Lake"
Infinite size, no provisioning

99.999999999% durability

Decoupling of storage to compute

* Centralised Architecture

* Object storage ==> supports any file format
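Because storage is decoupled from compute and any file format is supported, training data can be read straight from S3 by whatever compute you choose. A minimal sketch (not from the course), assuming the s3fs and pyarrow packages are installed and using a hypothetical bucket and keys:

```python
import pandas as pd

# pandas can read s3:// paths directly when s3fs is installed
csv_df = pd.read_csv("s3://my-globally-unique-ml-bucket/raw/train.csv")

# Parquet additionally needs pyarrow (or fastparquet)
parquet_df = pd.read_parquet("s3://my-globally-unique-ml-bucket/curated/train.parquet")

print(csv_df.shape, parquet_df.shape)
```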

 

### Amazon S3 Data Partitioning

* Pattern for speeding up range queries (AWS Athena)

* By Date : s3://bucket/my-data-set/year-month/day/hour/data_00.csv

* By Product:  s3://bucket/my-data-set/product-id/data_32.csv

* You can define whatever partitioning strategy you like

* Data partitioning will be handled by some tools we use (AWS Glue)

* Hands-on: set up a basic partitioning structure in a staging S3 bucket (see the sketch below)
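A minimal sketch (not from the course) of writing objects under the date-based prefix layout shown above, so that Athena or Glue can later prune partitions; the bucket name and local file are hypothetical.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")
bucket = "my-globally-unique-ml-bucket"

# Build a key of the form my-data-set/year/month/day/hour/data_00.csv
now = datetime.now(timezone.utc)
key = f"my-data-set/{now:%Y}/{now:%m}/{now:%d}/{now:%H}/data_00.csv"

s3.upload_file(Filename="data_00.csv", Bucket=bucket, Key=key)
print(f"Uploaded to s3://{bucket}/{key}")
```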

 
