AWS Certified Solutions Architect Associate (SAA-C03) Exam Notes
If you are planning or preparing for the AWS Certified Solutions Architect Associate (SAA-C03) exam, this article will help you get started.
Overview

FAQs

- Prepare well for the exam; it is the toughest exam I have cracked in recent years.
- Requires 2 to 3 months of preparation, depending on your daily commitment.
- The exam code is SAA-C03 (third version) and it costs 150 USD per attempt.
- You need to solve 65 questions in 130 minutes from your laptop under the supervision of an online proctor.
- The passing score is 720 (out of 1000), which means you should answer at least 47 (out of 65) questions correctly. There is no negative scoring, so answer all the questions!
- You get the result (Pass or Fail) as soon as you submit the exam; however, you don't receive any email immediately. It generally takes 2-3 days. I received an email with my digital certificate, score card, and badge after two days. You can also log in to AWS Training to get them later.
- You can schedule the exam with Pearson VUE or PSI. I heard bad reviews about PSI and chose Pearson VUE for my exam. The exam went smoothly.
- You get discount vouchers under the Benefits tab of the AWS Training portal once you crack at least one AWS exam. You can use these vouchers for subsequent exams.
- See the Exam Guide for more details.
Learning Path

I followed these four steps to prepare for the AWS exam:

1. Watch Videos

The first step in your learning path is to go through AWS lecture and training videos, which is the easiest way to get familiar with AWS services. It might take 1-2 months to cover all the AWS services, depending on your daily commitment. I recommend the following lecture videos:
- CloudGuru - 35+ hours of videos by Ryan Kroonenburg with course quizzes, hands-on labs, and 1 practice exam. They also provide an AWS Sandbox for unlimited hands-on practice when you buy a subscription.
- Udemy - 26+ hours of videos by Stephane Maarek with course quizzes, hands-on labs, and 1 practice exam.
- FreeCodeCamp - 10+ hours of amazing video on YouTube, absolutely free!

Hands-on practice with AWS services is very important to visualize the services and retain your AWS learning for a long time.
2. Practice Exams

Watching videos is not enough! You must solve as many practice exams as you can. They give you a very fair understanding of what to expect in the real exam. I recommend the following practice exams:

- Whizlabs - 7 practice tests (65 questions each), plus topic-wise practice tests.
- Udemy - 6 practice tests (65 questions each) by Jon Bonso
- AWS Sample Questions - 10 sample questions
3. Next Step

You can read the following to build confidence:

- AWS service FAQs - you will find the answers to most questions in the FAQs
- VPC Analogy - an interesting read to understand the difficult topic of VPC & networking
- AWS Well-Architected whitepapers
- AWS Ramp-Up Guide: Architect
4. Last Step

Once you are done with your preparation and ready for the exam, go through the exam notes below on your last day of preparation.

These AWS certification exam notes are the result of watching 50+ hours of AWS training videos, solving 1000+ AWS exam questions, and reading AWS service FAQs and whitepapers. Best of luck with your exam preparation!
AWS Infrastructure

AWS Region

- AWS Regions are physical locations around the world, each having a cluster of data centers.

| AWS Region | Code |
|---------------------------|----------------|
| US East (N. Virginia) | us-east-1 |
| US East (Ohio) | us-east-2 |
| US West (N. California) | us-west-1 |
| US West (Oregon) | us-west-2 |
| Africa (Cape Town) | af-south-1 |
| Asia Pacific (Hong Kong) | ap-east-1 |
| Asia Pacific (Mumbai) | ap-south-1 |
| Asia Pacific (Osaka) | ap-northeast-3 |
| Asia Pacific (Seoul) | ap-northeast-2 |
| Asia Pacific (Singapore) | ap-southeast-1 |
| Asia Pacific (Sydney) | ap-southeast-2 |
| Asia Pacific (Tokyo) | ap-northeast-1 |
| Canada (Central) | ca-central-1 |
| Europe (Frankfurt) | eu-central-1 |
| Europe (Ireland) | eu-west-1 |
| Europe (London) | eu-west-2 |
| Europe (Milan) | eu-south-1 |
| Europe (Paris) | eu-west-3 |
| Europe (Stockholm) | eu-north-1 |
| Middle East (Bahrain) | me-south-1 |
| South America (São Paulo) | sa-east-1 |

- You need to select the Region first for most AWS services, such as EC2, ELB, S3, Lambda, etc.
- You cannot select a Region for global AWS services such as IAM, AWS Organizations, Route 53, CloudFront, WAF, etc.
- Each AWS Region consists of multiple, isolated, and physically separate AZs (Availability Zones) within a geographic area.
AZ (Availability Zones)

- An AZ is one or more discrete data centers with redundant power, networking, and connectivity.
- All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking.
- Customers deploy applications across multiple AZs in the same Region for high availability, scalability, fault tolerance, and low latency.
- A Region usually has 3 AZs (minimum 2, maximum 6); for example, the 3 AZs in Ohio are us-east-2a, us-east-2b, and us-east-2c.
- For high availability in the us-east-2 Region with a minimum of 6 instances required, either place 3 instances in each of the 3 AZs, or 6 instances in each of 2 AZs (choose any 2 of the 3), so that the application still runs at full capacity when 1 AZ goes down.
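The sizing rule above generalizes: to keep a minimum number of instances running after losing one AZ, the surviving AZs must together hold that minimum. A quick sketch of the calculation (an illustration only, not an AWS API):

```python
import math

def instances_per_az(min_required, num_azs):
    # If one AZ fails, the remaining (num_azs - 1) AZs must still
    # provide min_required instances, so size each AZ accordingly.
    if num_azs < 2:
        raise ValueError("need at least 2 AZs to survive an AZ failure")
    return math.ceil(min_required / (num_azs - 1))

instances_per_az(6, 3)  # → 3 per AZ (9 total; 6 remain if one AZ fails)
instances_per_az(6, 2)  # → 6 per AZ (12 total; 6 remain if one AZ fails)
```

Either layout survives a single-AZ outage with 6 instances still running; the 3-AZ layout does it with fewer total instances.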
Security, Identity & Compliance

IAM (Identity and Access Management)

- IAM is used to manage access for users and to resources
- IAM is a global service (applied to all Regions at the same time). IAM is a free service.
- The root account is created by default with full administrator access; it shouldn't be used for day-to-day work
- Users are mapped to physical users and should log in to the AWS console with their own username and password
- Groups can have one or more users, but cannot contain other groups
- Policies are JSON documents that Allow or Deny access to actions that can be performed on AWS resources by a user, group, or role:
  - Version - the policy language version; 2012-10-17 is the latest version
  - Statement - a container for one or more policy statements
  - Sid (optional) - a way of labeling your policy statement
  - Effect - sets whether the statement Allows or Denies access
  - Principal - the user, group, role, or federated user to which you would like to allow or deny access
  - Action - one or more actions that can be performed on AWS resources
  - Resource - one or more AWS resources to which the actions apply
  - Condition (optional) - one or more conditions that must be satisfied for the statement to apply; otherwise the statement is ignored

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Deny-Barclay-S3-Access",
      "Effect": "Deny",
      "Principal": { "AWS": ["arn:aws:iam::123456789012:user/barclay"] },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:List*"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": [
            "rds.amazonaws.com",
            "rds.application-autoscaling.amazonaws.com"
          ]
        }
      }
    }
  ]
}
```
- Roles are associated with trusted entities - AWS services (EC2, Lambda, etc.), another AWS account, Web Identity (Cognito or any OpenID provider), or SAML 2.0 federation (your corporate directory). You attach a policy to the role, and these entities assume the role to access AWS resources.
- The Least Privilege Principle should be followed in AWS: don't give more permissions than a user needs.
- Resource-based policies are supported by S3, SNS, and SQS
- IAM Permission Boundaries are set on an individual user or role to define the maximum allowed permissions
- IAM policy evaluation logic ➔ Explicit Deny ➯ Organization SCPs ➯ Resource-based Policies (optional) ➯ IAM Permission Boundaries ➯ Identity-based Policies
- If you got SSL/TLS certificates from a third-party CA, import the certificate into AWS Certificate Manager (ACM) or upload it to the IAM Certificate Store
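The evaluation chain above can be sketched as a toy decision function. This is a deliberately simplified model for building intuition — the real evaluator also handles session policies, cross-account access, and other nuances:

```python
def evaluate_access(explicit_deny, scp_allows, boundary_allows,
                    identity_allows, resource_allows):
    """Toy model of IAM policy evaluation for a single request."""
    # 1. An explicit Deny in any applicable policy always wins.
    if explicit_deny:
        return "Deny"
    # 2. Guardrails (Organization SCPs, permission boundaries) cap what
    #    can ever be allowed; if either blocks the action, it is denied.
    if not (scp_allows and boundary_allows):
        return "Deny"
    # 3. Otherwise at least one identity-based or resource-based policy
    #    must grant the action; with no Allow, the implicit deny applies.
    return "Allow" if (identity_allows or resource_allows) else "Deny"

# An identity-policy Allow is overridden by an explicit Deny:
evaluate_access(True, True, True, True, False)    # → "Deny"
# A permission boundary caps an otherwise-allowed action:
evaluate_access(False, True, False, True, False)  # → "Deny"
```

The key exam takeaway the sketch encodes: explicit denies beat everything, guardrails never grant access on their own, and the default is an implicit deny.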
Access AWS programmatically

- AWS Management Console - use a password + MFA (multi-factor authentication)
- AWS CLI or SDK - use an Access Key ID (~username) and Secret Access Key (~password)

```shell
$ aws --version
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
$ aws iam list-users
```

- AWS CloudShell - CLI tool in the AWS browser console; requires logging in to AWS
Access AWS for non-IAM users

- A non-IAM user first authenticates through identity federation, then receives a temporary token (with an IAM role attached) generated by calling the AssumeRole API of STS (Security Token Service). The non-IAM user accesses the AWS resource by assuming the IAM role attached to the token.
- You can authenticate and authorize non-IAM users using the following identity federation options:
  - SAML 2.0 (old) to integrate Active Directory/ADFS; use the AssumeRoleWithSAML STS API
  - Custom Identity Broker, used when the identity provider is not compatible with SAML 2.0; use the AssumeRole or GetFederationToken STS API
  - Web Identity Federation is used to sign in with a well-known external identity provider (IdP), such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC)-compatible IdP. Get the ID token from the IdP, use the AWS Cognito API to exchange the ID token for a Cognito token, then use the AssumeRoleWithWebIdentity STS API to get temporary security credentials to access AWS resources
  - AWS Cognito is the identity provider recommended by Amazon
  - AWS Single Sign-On gives a single sign-on token to access AWS; no need to call an STS API
- You can use AWS Directory Service to manage Active Directory (AD) in AWS, for example:
  - AWS Managed Microsoft AD is a managed Microsoft Windows Server AD with a trust connection to your on-premises Microsoft AD. Best choice when you need all AD features to support AWS applications or Windows workloads; can be used for single sign-on for Windows workloads.
  - AD Connector is a proxy service that redirects requests to your on-premises Microsoft AD. Best choice for using an existing on-premises AD with compatible AWS services.
  - Simple AD is a standalone, AWS-managed, AD-compatible directory powered by Samba 4 with basic directory features. You cannot connect it to an on-premises AD. Best choice for basic directory features.
  - Amazon Cognito is a user directory for sign-up and sign-in to mobile and web applications using Cognito User Pools. It has nothing to do with Microsoft AD.
Amazon Cognito

- Cognito User Pools (CUP)
  - User Pools is a user directory for sign-up and sign-in to mobile and web applications.
  - A user pool is mainly used for authentication to access AWS services
  - Use it to authenticate mobile app users through the user pool directory, or federated through a third-party identity provider (IdP). The user pool manages the overhead of handling the tokens that are returned from social sign-in through Facebook, Google, Amazon, and Apple, and from OpenID Connect (OIDC) and SAML IdPs.
  - After successful authentication, your web or mobile app will receive user pool JWT tokens from Amazon Cognito. The JWT tokens can be used in two ways:
    - You use JWT tokens to retrieve temporary AWS credentials that allow your app to access other AWS services.
    - You create a group in the user pool with an IAM role to access API Gateway; you can then use the JWT token (for that group) to access Amazon API Gateway.
- Cognito Identity Pools (Federated Identities)
  - An identity pool is mainly used for authorization to access AWS services
  - You first authenticate the user using User Pools and then exchange the token with Identity Pools, which in turn uses AWS STS to generate temporary AWS credentials to access AWS resources.
  - You can give your mobile app users temporary access to write to an S3 bucket using their Facebook/Google login.
  - Supports guest users
AWS Key Management Service (KMS)

- AWS-managed, centralized key management service to create, manage, and rotate customer master keys (CMKs) for encryption at rest.
- You can create customer-managed Symmetric (a single key for both encrypt and decrypt operations) or Asymmetric (a public/private key pair for encrypt/decrypt or sign/verify operations) master keys
- You can enable automatic master key rotation once per year. The service keeps the older versions of the master key to decrypt old encrypted data.
AWS CloudHSM

- AWS-managed, dedicated hardware security module (HSM) in the AWS Cloud
- Enables you to securely generate, store, and manage your own cryptographic keys
- Integrates with your application using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.
- Use case: use KMS to create CMKs in a custom key store and store non-extractable key material in AWS CloudHSM to get full control over the encryption keys
AWS Systems Manager

- Parameter Store is centralized secrets and configuration data management, e.g. passwords, database details, and license codes
  - A parameter value can be of type String (plain text), StringList (comma-separated), or SecureString (KMS-encrypted data)
  - Use case: centralized configuration for dev/uat/prod environments, consumed by the CLI, SDKs, and Lambda functions
- Run Command allows you to automate common administrative tasks and perform one-time configuration changes on EC2 instances at scale
- Session Manager replaces the need for bastion hosts to access instances in a private subnet
AWS Secrets Manager

- Secrets Manager is mainly used to store, manage, and rotate secrets (passwords) such as database credentials, API keys, and OAuth tokens.
- Secrets Manager has native support for rotating the database credentials of RDS databases - MySQL, PostgreSQL, and Amazon Aurora
- For other secrets such as API keys or tokens, you need to use a Lambda function to implement a customized rotation function
AWS Shield

- AWS-managed Distributed Denial of Service (DDoS) protection service
- Protects against Layer 3 and 4 (Network and Transport) attacks
- AWS Shield Standard is an automatic and free DDoS protection service for all AWS customers for CloudFront and Route 53 resources
- AWS Shield Advanced is a paid service for enhanced DDoS protection for EC2, ELB, CloudFront, and Route 53 resources
AWS WAF

- Web Application Firewall protects web applications against common web exploits
- Protects against Layer 7 (HTTP) attacks and blocks common attack patterns, such as SQL injection or cross-site scripting (XSS)
- You can deploy WAF on CloudFront, Application Load Balancer, API Gateway, and AWS AppSync
AWS Firewall Manager

- Use AWS Firewall Manager to centrally configure and manage AWS WAF rules, AWS Shield Advanced, Network Firewall rules, and Route 53 DNS Firewall rules across accounts and resources in an AWS Organization
- Use case: meet government regulations by deploying an AWS WAF rule that blocks traffic from embargoed countries across accounts and resources
Amazon GuardDuty

- Reads VPC Flow Logs, DNS logs, and CloudTrail events; applies machine learning algorithms and anomaly detection to discover threats
- Can protect against cryptocurrency attacks
Amazon Inspector

- Automated security assessment service for EC2 instances, performed by installing an agent in the OS of the EC2 instance.
- Inspector comes with pre-defined rules packages:
  - The Network Reachability rules package checks for unintended network accessibility of EC2 instances
  - The Host Assessment rules package checks for vulnerabilities and insecure configurations on the EC2 instance. It includes Common Vulnerabilities and Exposures (CVE), Center for Internet Security (CIS) operating system configuration benchmarks, and security best practices.
Amazon Macie

- Managed service to discover and protect your sensitive data in AWS
- Macie identifies and alerts on sensitive data, such as Personally Identifiable Information (PII), in your selected S3 buckets
AWS Config

- Managed service to assess, audit, and evaluate the configurations of your AWS resources across multiple Regions and accounts
- You are notified via SNS of any configuration change
- Integrates with CloudTrail and provides a resource configuration history
- Use case: customers that need to comply with standards like PCI-DSS (Payment Card Industry Data Security Standard) or HIPAA (U.S. Health Insurance Portability and Accountability Act) can use this service to assess the compliance of their AWS infrastructure configurations
Compute

EC2 (Elastic Compute Cloud)

- Infrastructure as a Service (IaaS) - a virtual machine on the cloud
- You must provision a Nitro-based EC2 instance to achieve 64,000 EBS IOPS; the maximum is 32,000 EBS IOPS with non-Nitro EC2.
- When you stop and start an EC2 instance, its public IP can change. Use an Elastic IP to assign a fixed public IPv4 address to your EC2 instance. By default, all AWS accounts are limited to five (5) Elastic IP addresses per Region.
- Get EC2 instance metadata such as the private & public IP from `http://169.254.169.254/latest/meta-data` and user-defined data from `http://169.254.169.254/latest/user-data`
- Place all the EC2 instances in the same AZ to reduce data transfer costs
- EC2 Hibernate saves the contents of instance memory (RAM) to the Amazon EBS root volume. When the instance restarts, the RAM contents are reloaded, bringing it back to its last running state; this is also known as pre-warming the instance. You can hibernate an instance only if hibernation is enabled for it and it meets the hibernation prerequisites
- Use VM Import/Export to import a virtual machine image and convert it to an Amazon EC2 AMI to launch EC2 instances
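The metadata endpoint above can only be reached from inside an instance (169.254.169.254 is a link-local address). A minimal sketch of querying it with the standard library, using the simple IMDSv1-style GET (IMDSv2 additionally requires a session token header):

```python
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/latest"

def metadata_url(path):
    # Build the metadata URL for a given key, e.g. "meta-data/public-ipv4".
    return f"{METADATA_BASE}/{path.lstrip('/')}"

def get_metadata(path, timeout=2):
    # Works only from inside an EC2 instance; returns the value as text.
    with urlopen(metadata_url(path), timeout=timeout) as resp:
        return resp.read().decode()

# Example (run on an EC2 instance):
# print(get_metadata("meta-data/local-ipv4"))
# print(get_metadata("user-data"))
```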
EC2 Instance Types

You can choose an EC2 instance type based on your requirements; for example, an m5.2xlarge (Linux) has 8 vCPUs, 32 GB RAM, EBS-only storage, up to 10 Gbps of network bandwidth, and up to 4,750 Mbps of EBS bandwidth.
| Instance Class | Usage Type | Usage Example |
|---|---|---|
| T, M | General Purpose | Web server, code repo, microservice, small database, virtual desktop, dev environment |
| C | Compute Optimized | High Performance Computing (HPC), batch processing, gaming servers, scientific modelling, CPU-based machine learning |
| R, X, Z | Memory Optimized | In-memory cache, high-performance databases, real-time big data analytics |
| F, G, P | Accelerated Computing | High GPU, graphics-intensive applications, machine learning, speech recognition |
| D, H, I | Storage Optimized | EC2 instance storage, high I/O performance, HDFS, MapReduce file systems, Spark, Hadoop, Redshift, Kafka, Elasticsearch |
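Instance type names encode family, generation, and size, e.g. `m5.2xlarge` is family `m`, generation 5, size `2xlarge`. A small parser illustrates the naming scheme (a simplified sketch; extra letters such as the `gd` in `c6gd` denote additional attributes like Graviton CPUs or local NVMe storage):

```python
import re

def parse_instance_type(name):
    """Split an EC2 instance type like 'm5.2xlarge' into its parts.

    Assumes the pattern <family letters><generation digits><attribute letters>.<size>.
    """
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z-]*)\.([a-z0-9]+)", name)
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attributes, size = m.groups()
    return {"family": family, "generation": int(generation),
            "attributes": attributes, "size": size}

parse_instance_type("m5.2xlarge")
# → {'family': 'm', 'generation': 5, 'attributes': '', 'size': '2xlarge'}
```

This makes the table above easier to apply in the exam: read the family letter first (T/M general purpose, C compute, R/X/Z memory, and so on), then the generation and size.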
EC2 Launch Types

- On-Demand - pay as you use, billed per hour, costly
- Reserved - upfront payment to reserve for 1 year or 3 years; two classes:
  - Standard - unused instances can be sold in the AWS Reserved Instance Marketplace
  - Convertible - can be exchanged for another Convertible Reserved Instance with different instance attributes
- Scheduled Reserved Instances - reserve capacity that is scheduled to recur daily, weekly, or monthly, with a specified start time and duration, for a one-year term. After you complete your purchase, the instances are available to launch during the time windows that you specified.
- Spot Instances - up to 90% discount, the cheapest option; useful for applications that are flexible in timing and can handle interruptions and recover gracefully.
  - Spot Blocks can be launched with a required duration; they are not interrupted due to changes in the Spot price
  - A Spot Fleet is a collection, or fleet, of Spot Instances, and optionally On-Demand Instances, which attempts to launch enough Spot and On-Demand Instances to meet the specified target capacity
- Dedicated Instances - your instance runs on dedicated hardware, providing physical isolation (single-tenant)
- Dedicated Hosts - your instances run on a dedicated physical server, with more visibility into how instances are placed on the server. Lets you use existing server-bound software licenses and address corporate compliance and regulatory requirements.

You have a limit of 20 Reserved Instances, 1152 vCPUs for On-Demand standard instances, and 1440 vCPUs for Spot Instances. You can increase these limits by submitting the EC2 limit increase request form.
EC2 Enhanced Networking

- An Elastic Network Interface (ENI) is a virtual network card that you attach to an EC2 instance in the same AZ. An ENI has one primary private IPv4 address, one or more secondary private IPv4 addresses, one Elastic IP per private IPv4, one public IPv4, one or more IPv6 addresses, one or more security groups, a MAC address, and a source/destination check flag.
  - While the primary ENI cannot be detached from an EC2 instance, a secondary ENI with a private IPv4 can be detached and attached to a standby EC2 instance if the primary EC2 instance becomes unreachable (failover)
- Elastic Network Adapter (ENA) for C4, D2, and M4 EC2 instances; up to 100 Gbps network speed.
- Elastic Fabric Adapter (EFA) is an ENA with additional OS-bypass functionality, which enables HPC and machine learning applications to bypass the operating system kernel and communicate directly with the EFA device, resulting in very high performance and low latency. For M5, C5, R5, I3, G4, and metal EC2 instances.
- Intel 82599 Virtual Function (VF) interface for C3, C4, D2, I2, M4, and R3 EC2 instances; up to 10 Gbps network speed.
EC2 Placement Group Strategies

Placement groups can span AZs within a Region only; they cannot span Regions.

- Cluster - same AZ, same rack; low latency and high network throughput; High-Performance Computing (HPC)
- Spread - different AZs, distinct racks; high availability, critical applications; limited to 7 instances per AZ per placement group.
- Partition - same or different AZs, different racks (partitions); distributed applications like Hadoop, Cassandra, Kafka, etc.; up to 7 partitions per AZ
AMI (Amazon Machine Image)

- A customized image of an EC2 instance, with a built-in OS, software, configurations, etc.
- You can create an AMI from an EC2 instance and launch a new EC2 instance from an AMI.
- AMIs are built for a specific Region and can be copied across Regions
ELB (Elastic Load Balancing)
- AWS load balancer provides a static DNS name provided for e.g.
http://myalb-123456789.us-east-1.elb.amazonaws.com
AWS 负载均衡器提供了一个静态 DNS 名称,例如http://myalb-123456789.us-east-1.elb.amazonaws.com - AWS load balancer routes the request to Target Groups. Target group can have one or more EC2 instances, IP Addresses or lambda functions.
AWS 负载均衡器将请求路由到目标组。目标组可以包含一个或多个 EC2 实例、IP 地址或 Lambda 函数。 - Three types of ELB - Classic Load Balancer, Application Load Balancer, and Network Load Balancer
ELB 的三种类型 - Classic Load Balancer、Application Load Balancer 和 Network Load Balancer - Application Load Balancer (ALB):
Application Load Balancer (ALB):- Routing based on hostname, request path, params, headers, source IP etc.
基于主机名、请求路径、参数、标头、源 IP 等进行路由。 - Support Request tracing, add
X-Amzn-Trace-Idheader before sending the request to target
支持请求跟踪,在将请求发送到目标之前,在请求前添加X-Amzn-Trace-Id标头 - Client IP and port can be found in
X-Forwarded-For and X-Forwarded-Port headers
客户端 IP 和端口可以在 X-Forwarded-For 和 X-Forwarded-Port 标头中找到 - integrate with WAF with rate-limiting (throttle) rules to protect against DDoS attacks
与 WAF 集成,并设置速率限制(节流)规则以防止 DDoS 攻击
- Network Load Balancer (NLB):
网络负载均衡器 (NLB):- Handle volatile workloads and extreme low-latency
处理易变的工作负载和极低的延迟 - Provide static IP/Elastic IP for the load balancer per AZ
为每个可用区中的负载均衡器提供静态 IP/弹性 IP - allows registering targets by IP address
允许通过 IP 地址注册目标 - Use NLB with Elastic IP in front of ALBs when there is a requirement of whitelisting ALB
当需要白名单 ALB 时,在 ALB 前面使用带有弹性 IP 的 NLB
- Stickiness: works in CLB and ALB. Stickiness and its duration can be set at Target Group level. Doesn’t work with NLB
粘性:在 CLB 和 ALB 中均可用。粘性和其持续时间可在目标组级别设置。不适用于 NLB
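The ALB host/path routing described above can be pictured with a small sketch. This is illustrative pseudologic, not the AWS API: the rule fields (`priority`, `host`, `path_prefix`, `target_group`) and the sample rules are hypothetical.

```python
# Illustrative sketch of ALB-style listener rule evaluation: rules are
# checked in priority order and the first match decides the target group;
# an unmatched request falls through to the default target.
def route_request(host, path, rules, default_target):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule.get("host") and rule["host"] != host:
            continue  # host condition present but not satisfied
        if rule.get("path_prefix") and not path.startswith(rule["path_prefix"]):
            continue  # path condition present but not satisfied
        return rule["target_group"]
    return default_target

# Hypothetical rules: route API traffic by host + path, images by path only.
rules = [
    {"priority": 1, "host": "api.example.com", "path_prefix": "/v1", "target_group": "api-v1"},
    {"priority": 2, "path_prefix": "/images", "target_group": "static-assets"},
]
```

Real ALB rules also support query-string, header, and source-IP conditions; the sketch only models the two most commonly tested ones.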
| ELB Types ELB 类型 | Supported Protocol 支持的协议 |
|---|---|
| Application Load Balancer 应用负载均衡器 | HTTP, HTTPS, WebSocket HTTP、HTTPS、WebSocket |
| Network Load Balancer 网络负载均衡器 | TCP, UDP, TLS TCP、UDP、TLS |
| Gateway Load Balancer 网关负载均衡器 | Third-party appliances 第三方设备 |
| Classic Load Balancer (old) 经典负载均衡器(旧版) | HTTP, HTTPS, TCP HTTP、HTTPS、TCP |
ASG (Auto Scaling Group)
ASG(自动伸缩组)
- Scale-out (add) or scale-in (remove) EC2 instances based on scaling policy - CPU, Network, Custom metric or Scheduled.
根据伸缩策略(CPU、网络、自定义指标或计划)进行横向扩展(添加)或横向缩减(移除)EC2 实例。 - You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity. ASG runs EC2 instances at desired capacity if no policy is specified. Minimum and maximum capacity are the boundaries within which the ASG scales in or out.
min <= desired <= max
您可以通过设置最小容量、最大容量和期望容量来配置自动伸缩组的大小。如果没有指定策略,ASG 将以期望容量运行 EC2 实例。最小容量和最大容量是 ASG 纵向伸缩或横向伸缩的边界。min <= desired <= max - Instances are created in ASG using Launch Configuration (legacy) or Launch Template (newer)
实例在 ASG 中使用启动配置(旧版)或启动模板(较新版)创建 - You cannot change the launch configuration for an ASG, you must create a new launch configuration and update your ASG with it.
您无法更改 ASG 的启动配置,必须创建新的启动配置并用其更新您的 ASG。 - You can create ASG that launches both Spot and On-Demand Instances or multiple instance types using launch template, not possible with launch configuration.
您可以使用启动模板创建同时启动 Spot 和按需实例或多种实例类型的 ASG,而启动配置无法实现此功能。 - Dynamic Scaling Policy 动态扩展策略
- Target Tracking Scaling - can have more than one policy, e.g. add or remove capacity to keep the average aggregate CPU utilization of your Auto Scaling group at 40% and the request count per target of your ALB target group at 1000. If both policies trigger at the same time, the largest capacity is used for both scale-out and scale-in.
目标跟踪扩缩容 - 可以有多个策略,例如,将您的 Auto Scaling 组的平均聚合 CPU 利用率保持在 40%,并将您的 ALB 目标组的每个目标请求数保持在 1000,以增加或减少容量。如果两个策略同时发生,则在扩展和缩减时都使用最大的容量。 - Simple Scaling - e.g. CloudWatch alarm CPUUtilization (>80%) - add 2 instances
简单扩缩容 - 例如,CloudWatch 警报 CPUUtilization (>80%) - 添加 2 个实例 - Step Scaling - e.g. CloudWatch alarm CPUUtilization (60%-80%)- add 1, (>80%) - add 3 more, (30%-40%) - remove 1, (<30%) - remove 2 more
步进扩缩容 - 例如,CloudWatch 警报 CPUUtilization (60%-80%) - 添加 1 个,(>80%) - 再添加 3 个,(30%-40%) - 移除 1 个,(<30%) - 再移除 2 个 - Scheduled Action - e.g. Increase min capacity to 10 at 5pm on Fridays
计划操作 - 例如,在星期五下午 5 点将最小容量增加到 10
- Default Termination Policy - find the AZ with the most instances and terminate the instance with the oldest launch configuration; in case of a tie, terminate the instance closest to the next billing hour
默认终止策略 - 查找具有最多实例的可用区,然后删除具有最旧启动配置的实例,如果存在平局,则删除最接近下一个计费小时的实例 - Cooldown period is the amount of time to wait for previous scaling activity to take effect. Any scaling activity during cooldown period is ignored.
冷却时间是等待先前扩展活动生效的时间量。在此期间内的任何扩展活动都将被忽略。 - Health check grace period is the amount of time to wait before checking the health of an EC2 instance that has just come into service, giving it enough time to warm up.
运行状况检查宽限期是检查刚投入使用的 EC2 实例运行状况状态的等待时间,以便为其提供足够的预热时间。 - You can add lifecycle-hooks to ASG to perform custom action during:-
您可以向 ASG 添加生命周期挂钩,以便在以下时间执行自定义操作:- scale-out to run script, install softwares and send
complete-lifecycle-actioncommand to continue
扩展以运行脚本、安装软件并发送complete-lifecycle-action命令以继续 - scale-in e.g. download logs, take snapshot before termination
缩减,例如在终止前下载日志、拍摄快照
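The target-tracking idea above reduces to simple arithmetic: desired capacity scales with the ratio of the observed metric to the target, then is clamped to the group's bounds (min <= desired <= max). A minimal sketch with illustrative names:

```python
import math

# Sketch of target-tracking scaling: desired capacity is proportional to
# (observed metric / target metric), rounded up, then clamped to the
# ASG's configured minimum and maximum sizes.
def desired_capacity(current, metric_value, target_value, min_size, max_size):
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(desired, max_size))
```

For example, 4 instances averaging 80% CPU against a 40% target would scale out toward 8 instances (subject to the max bound).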
Lambda
- FaaS (Function as a Service), Serverless
FaaS(函数即服务)、无服务器 - Lambda functions support many languages such as Node.js, Python, Java, C#, Golang, Ruby, etc.
Lambda 函数支持多种语言,例如 Node.js、Python、Java、C#、Golang、Ruby 等。 - Lambda limitations:-
Lambda 限制:-
- execution time can’t exceed 900 seconds or 15 min
执行时间不能超过 900 秒或 15 分钟 - minimum memory is 128MB and can be configured up to 10GB in 1-MB increments
所需的最小内存为 128MB,最高可达 10GB,每次增加 1MB - /tmp directory storage for downloaded files can't exceed 512 MB
/tmp 目录大小下载文件不能超过 512 MB - max environment variables size can be 4KB
最大环境变量大小为 4KB - compressed .zip and uncompressed code can’t exceed 50MB and 250MB respectively
压缩的 .zip 文件和未压缩的代码分别不能超过 50MB 和 250MB
- Lambda function can be triggered on DynamoDB database trigger, S3 object events, event scheduled from EventBridge (CloudWatch Events), message received from SNS or SQS, etc.
Lambda 函数可以由 DynamoDB 数据库触发器、S3 对象事件、EventBridge(CloudWatch Events)的计划事件、SNS 或 SQS 接收的消息等触发。 - Assign IAM Role to lambda function to give access to AWS resource for e.g. create snapshot of EC2, process image and store in S3, etc.
为 Lambda 函数分配 IAM 角色,以授予其访问 AWS 资源的权限,例如创建 EC2 快照、处理图像并将其存储在 S3 中等。 - Lambda can auto scale in seconds to handle sudden burst of traffic. EC2 require minutes to auto scale.
Lambda 可在几秒钟内自动扩展以处理突发流量。EC2 需要几分钟才能自动扩展。 - You are charged based on number of requests, execution time and resource (memory) usage. Cheaper than EC2.
您的收费基于请求数量、执行时间和资源(内存)使用情况。比 EC2 便宜。 - You can use Lambda@Edge to run code at CloudFront edge locations globally
您可以使用 Lambda@Edge 在全球的 CloudFront 边缘运行代码 - You can optionally setup a dead-letter queue (DLQ) with SQS or SNS to forward unprocessed or failed requests payload
您可以选择使用 SQS 或 SNS 设置死信队列 (DLQ),以转发未处理或失败的请求负载 - You can enable and watch the lambda execution logs in CloudWatch
您可以在 CloudWatch 中启用并查看 Lambda 执行日志
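A minimal, locally testable handler sketch for the S3-trigger use case above. The event shape mirrors the documented S3 notification structure; the bucket/key values and the "processing" itself are hypothetical.

```python
# Sketch of a Lambda handler for an S3 ObjectCreated event. In a real
# function you would process each object (e.g. create a thumbnail); here
# we just collect the object URIs to keep the sketch locally runnable.
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Hypothetical sample event, trimmed to the fields the handler reads.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
```

Structuring the handler so it can be invoked with a plain dict makes it easy to unit test before deploying.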
Application Integration 应用程序集成
SQS (Amazon Simple Queue Service)
- Fully managed service with following specifications for Standard SQS:-
完全托管的服务,标准 SQS 具有以下规格:- can have unlimited number of messages waiting in queue
队列中可以有无限制数量的消息等待 - default retention period is 4 days and max 14 days
默认保留期为 4 天,最长为 14 天 - can send message upto 256KB in size
消息大小可达 256KB - unlimited throughput and low latency (<10ms on publish and receive)
吞吐量无限,延迟低(发布和接收时 <10ms) - can have duplicate messages (At least once delivery)
可能存在重复消息(至少一次投递) - can have out-of-order messages (best-effort ordering)
可能存在乱序消息(尽力而为排序)
- Consumers (EC2 instances or Lambda functions) poll messages in batches (up to 10 messages) and delete them from the queue after processing. If not deleted, messages stay in the queue and may be processed multiple times.
消费者(可以是 EC2 实例或 Lambda 函数)会批量(最多 10 条消息)轮询消息,并在处理后从队列中删除它们。如果不删除,它们将保留在队列中,可能会被处理多次。 - You should allow Producer and Consumer to send and receive messages from SQS Queue Access Policy
您应该允许生产者和消费者通过 SQS 队列访问策略发送和接收消息。 - Message Visibility Timeout: when a message is polled by a consumer, it becomes invisible to other consumers for the timeout period.
消息可见性超时当消息被消费者轮询时,在超时期间内,其他消费者将无法看到该消息。 - You can setup a Dead-letter queue (DLQ) which is another SQS to keep the messages which are failed to process by consumers multiple times and exceed the Maximum receives threshold in SQS.
您可以设置一个死信队列(DLQ),它是另一个 SQS,用于存放那些被消费者多次处理失败且超过 SQS 中最大接收次数阈值的消息。 - You use SQS Temporary Queue Client to implement SQS Request-Response System.
您可以使用 SQS 临时队列客户端来实现 SQS 请求-响应系统。 - You can delay message (consumers don’t see them immediately) up to 15 minutes (default 0 seconds). You can do it using Delivery Delay configuration at queue level or DelaySeconds parameter at message level.
您可以延迟消息(消费者不会立即看到它们),最长可达 15 分钟(默认 0 秒)。您可以通过队列级别的“延迟队列”配置或消息级别的 `DelaySeconds` 参数来实现。 - Long polling is when the
ReceiveMessageWaitTimeSeconds property of a queue is set to a value greater than zero. Long polling reduces the number of empty responses by allowing Amazon SQS to wait until a message is available before sending a response to a ReceiveMessage request, which helps reduce cost.
长轮询是指队列的 `ReceiveMessageWaitTimeSeconds` 属性设置为大于零的值。长轮询允许 Amazon SQS 等待消息可用后再响应 `ReceiveMessage` 请求,从而减少空响应的数量,有助于降低成本。 - You can create SQS of type FIFO which guarantee ordering and exactly once processing with limited throughput upto 300 msg/s without and 3000 msg/s with batching. FIFO queue name must end with suffix .fifo. You can not convert Standard SQS to FIFO SQS.
您可以创建 FIFO 类型的 SQS,它保证顺序和仅一次处理,吞吐量限制为无批处理时最高 300 条消息/秒,有批处理时最高 3000 条消息/秒。FIFO 队列名称必须以 `.fifo` 后缀结尾。您不能将标准 SQS 转换为 FIFO SQS。 - Use case: Cloudwatch has custom metric on =(SQS queue length/Number of EC2 instances), which alarm ASG to auto scale EC2 instances (SQS consumer) based on number of messages in queue.
用例:Cloudwatch 有一个自定义指标(=(SQS 队列长度/EC2 实例数)),该指标根据队列中的消息数量触发 ASG 自动扩展 EC2 实例(SQS 消费者)。
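The CloudWatch use case above boils down to a backlog-per-instance calculation. A hedged sketch (the function name and target value are illustrative, not an AWS API):

```python
import math

# Sketch of the SQS-driven scaling metric: given the current queue length
# and an acceptable backlog per consumer instance, compute how many
# instances the ASG should run to keep the backlog at or below target.
def instances_needed(queue_length, msgs_per_instance_target):
    return math.ceil(queue_length / msgs_per_instance_target)
```

The ASG would then scale toward this number, bounded by its min/max capacity.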
SNS (Amazon Simple Notification Service)
- PubSub model, where publisher sends the messages on SNS topic and all topic subscribers receive those messages.
发布/订阅模型,发布者将消息发送到 SNS 主题,所有主题订阅者都会收到这些消息。 - Up to 100,000 topics and up to 12,500,000 subscriptions per topic
最多 100,000 个主题,每个主题最多 12,500,000 个订阅 - Subscribers can be: Kinesis Data Firehose, SQS, HTTP, HTTPS, Lambda, Email, Email-JSON, SMS Messages, Mobile Notifications.
订阅者可以是:Kinesis Data Firehose、SQS、HTTP、HTTPS、Lambda、Email、Email-JSON、SMS Messages、Mobile Notifications。 - You can setup a Subscription Filter Policy which is JSON policy to send the filtered messages to specific subscribers.
您可以设置一个订阅筛选策略,这是一个 JSON 策略,用于将筛选后的消息发送到特定的订阅者。 - Fan out pattern: SNS topic has multiple SQS subscribers e.g. send all order messages to SNS topic and then send filtered messages based on order status to 3 different application services using SQS.
扇出模式:SNS 主题有多个 SQS 订阅者,例如将所有订单消息发送到 SNS 主题,然后根据订单状态将筛选后的消息通过 SQS 发送给 3 个不同的应用程序服务。
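The subscription filter policy mentioned above can be sketched with simplified, exact-match-only semantics (real SNS policies also support prefix, numeric, and anything-but operators; names here are illustrative):

```python
# Simplified sketch of SNS filter-policy matching: a policy matches when,
# for every attribute key in the policy, the message's attribute value is
# one of the allowed values listed for that key.
def policy_matches(filter_policy, message_attributes):
    return all(
        message_attributes.get(key) in allowed
        for key, allowed in filter_policy.items()
    )

# Hypothetical policy for the order fan-out example: this subscriber only
# receives messages for shipped or delivered orders.
order_filter = {"order_status": ["shipped", "delivered"]}
```

In the fan-out pattern, each SQS subscription would carry its own policy so each application service sees only the order statuses it cares about.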
Amazon MQ
- Amazon managed Apache ActiveMQ
Amazon 托管的 Apache ActiveMQ - Migrate existing message brokers that use open protocols such as AMQP, MQTT, OpenWire, or STOMP to AWS.
将现有的消息代理迁移到 AWS,使用 MQTT 协议。
Storage 存储
S3 (Simple Storage Service)
- S3 Bucket is an object-based storage, used to manage data as objects
S3 存储桶是基于对象的存储,用于将数据作为对象进行管理 - An S3 Object has:-
S3 对象包含:-- Value - data bytes of object (photos, videos, documents, etc.)
值 - 对象的数据字节(照片、视频、文档等) - Key - full path of the object in bucket e.g.
/movies/comedy/abc.avi
键 - 存储桶中对象的完整路径,例如/movies/comedy/abc.avi - Version ID - version object, if versioning is enabled
版本 ID - 对象的版本,如果启用了版本控制 - Metadata - additional information
元数据 - 附加信息
- S3 Bucket holds objects. S3 console shows virtual folders based on key.
S3 存储桶用于存储对象。S3 控制台根据键显示虚拟文件夹。 - S3 is a universal namespace so bucket names must be globally unique (think like having a domain name)
S3 是一个全局命名空间,因此存储桶名称必须全局唯一(可以将其想象成域名)。https://<bucket-name>.s3.<aws-region>.amazonaws.com or https://s3.<aws-region>.amazonaws.com/<bucket-name> - Unlimited Storage, Unlimited Objects from 0 Bytes to 5 Terabytes in size. You should use multi-part upload for Object size > 100MB
无限存储,无限对象,大小从 0 字节到 5 TB。对于大于 100MB 的对象,您应该使用分段上传。 - All new buckets are private when created by default. You should enable public access explicitly.
默认情况下,所有新创建的存储桶都是私有的。您应该显式启用公共访问。 - Access control can be configured using Access Control List (ACL) (deprecated) and S3 Bucket Policies (recommended)
可以通过访问控制列表 (ACL)(已弃用)和 S3 存储桶策略(推荐)来配置访问控制。 - S3 Bucket Policies are JSON based policy for complex access rules at user, account, folder, and object level
S3 存储桶策略是基于 JSON 的策略,用于在用户、账户、文件夹和对象级别设置复杂的访问规则。 - Enable S3 Versioning and MFA delete features to protect against accidental delete of S3 Object.
启用 S3 版本控制和 MFA 删除功能,以防止意外删除 S3 对象。 - Use Object Lock to store object using write-once-read-many (WORM) model to prevent objects from being deleted or overwritten for a fixed amount of time (Retention period) or indefinitely (Legal hold). Each version of object can have different retention-period.
使用对象锁定以写入一次、读取多次 (WORM) 模型存储对象,以防止对象在固定时间内(保留期)或无限期内(法律保留)被删除或覆盖。对象的每个版本都可以有不同的保留期。 - You can host static websites on S3 bucket consisting of HTML, CSS, client-side JavaScript, and images. You need to enable Static website hosting and Public access for S3 to avoid 403 forbidden error. Also you need to add CORS Policy to allow cross-origin request.
您可以在由 HTML、CSS、客户端 JavaScript 和图像组成的 S3 存储桶上托管静态网站。您需要为 S3 启用静态网站托管和公共访问,以避免 403 禁止访问错误。您还需要添加 CORS 策略以允许跨源请求。https://<bucket-name>.s3-website[.-]<aws-region>.amazonaws.com - Generate a pre-signed URL from CLI or SDK (can’t from the web) to provide temporary access to an S3 object to either upload or download object data. You specify expiry (say 300 seconds) while generating the url:-
您可以通过 CLI 或 SDK(无法通过 Web)生成预签名 URL,为 S3 对象提供临时访问权限,以便上传或下载对象数据。在生成 URL 时,您可以指定过期时间(例如 300 秒):aws s3 presign s3://mybucket/myobject --expires-in 300 - S3 Select or Glacier Select can be used to query subset of data from S3 Objects using SQL query. S3 Objects can be CSV, JSON, or Apache Parquet. GZIP & BZIP2 compression is supported with CSV or JSON format with server-side encryption.
S3 Select 或 Glacier Select 可用于使用 SQL 查询从 S3 对象中查询数据子集。S3 对象可以是 CSV、JSON 或 Apache Parquet。对于 CSV 或 JSON 格式,支持 GZIP 和 BZIP2 压缩,并支持服务器端加密。 - using
Range HTTP header in a GET request to download a specific range of bytes of an S3 object, known as Byte-Range Fetch
在 GET 请求中使用 `Range` HTTP 标头下载 S3 对象的特定字节范围,称为字节范围获取 - You can create S3 event notification to push events e.g.
s3:ObjectCreated:* to an SNS topic or SQS queue, or to execute a Lambda function. It is possible to receive a single notification for two simultaneous writes to a non-versioned object. Enable versioning to ensure you get all notifications.
您可以创建 S3 事件通知,将事件(例如s3:ObjectCreated:*)推送到 SNS 主题、SQS 队列或执行 Lambda 函数。对于同时写入非版本控制对象的两次写入,您可能会收到单个通知。启用版本控制可确保您收到所有通知。 - Enable S3 Cross-Region Replication for asynchronous replication of object across buckets in another region. You must have versioning enabled on both source and destination side. Only new S3 Objects are replicated after you enable them.
为跨区域存储桶启用 S3 跨区域复制,以异步复制对象。您必须在源端和目标端都启用版本控制。启用后,只有新的 S3 对象会被复制。 - Enable Server access logging for logging object-level fields object-size, total time, turn around time, and HTTP referrer. Not available with CloudTrail.
启用服务器访问日志记录,用于记录对象级别的字段,如对象大小、总时间、周转时间和 HTTP 引用者。CloudTrail 不提供此功能。 - Use VPC S3 gateway endpoint to access S3 bucket within AWS VPC to reduce the overall data transfer cost.
使用 VPC S3 网关端点在 AWS VPC 内访问 S3 存储桶,以降低总体数据传输成本。 - Enable S3 Transfer Acceleration for faster transfer and high throughput to S3 bucket (mainly uploads), Create CloudFront distribution with OAI pointing to S3 for faster-cached content delivery (mainly reads)
启用 S3 传输加速,以实现到 S3 存储桶的更快传输和高吞吐量(主要用于上传);创建带有指向 S3 的 OAI 的 CloudFront 分发,以实现更快的缓存内容交付(主要用于读取)。 - Restrict the access of S3 bucket through CloudFront only using Origin Access Identity (OAI). Make sure user can’t use a direct URL to the S3 bucket to access the file.
仅通过源访问身份 (OAI) 限制对 CloudFront 的 S3 存储桶的访问。确保用户无法使用 S3 存储桶的直接 URL 来访问文件。
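The Byte-Range Fetch bullet above can be planned like this; a sketch assuming inclusive HTTP byte ranges, with the chunk size chosen by the caller (as used for parallel downloads or resumed transfers):

```python
# Sketch of planning Byte-Range Fetches: split an object of a given size
# into Range header values of at most `chunk` bytes. HTTP byte ranges are
# inclusive on both ends, hence the -1 on the end offset.
def byte_ranges(object_size, chunk):
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + chunk, object_size) - 1
        ranges.append(f"bytes={start}-{end}")
        start = end + 1
    return ranges
```

Each value would be sent as the `Range` header of a separate GET request.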
S3 Storage Class Types
S3 存储类别类型
- Standard: Costly choice for very high availability, high durability and fast retrieval
Standard:成本较高,但提供极高的可用性、高持久性和快速检索 - Intelligent Tiering: Uses ML to analyze your Object’s usage and move to the appropriate cost-effective storage class automatically
Intelligent Tiering:使用机器学习分析对象的用法,并自动将其移动到合适的、具有成本效益的存储类别 - Standard-IA: Cost-effective for infrequent access files which cannot be recreated
Standard-IA:对于无法重新创建的访问频率较低的文件,具有成本效益 - One-Zone IA: Cost-effective for infrequent access files which can be recreated
One-Zone IA:对于可以重新创建的访问频率较低的文件,具有成本效益 - Glacier: Cheaper choice to archive data. You must purchase Provisioned Capacity when you require guaranteed Expedited retrievals.
Glacier:归档数据的更便宜选择。当您需要保证的 Expedited 检索时,必须购买 Provisioned Capacity。 - Glacier Deep Archive: Cheapest choice for Long-term storage of large amount of data for compliance
Glacier Deep Archive:用于合规性的大量数据的长期存储的最便宜选择
| S3 Storage Class S3 存储类别 | Durability 持久性 | Availability 可用性 | AZ | Min. Storage 最小存储 | Retrieval Time 检索时间 | Retrieval fee 检索费用 |
|---|---|---|---|---|---|---|
| S3 Standard (General Purpose) S3 Standard(通用型) | 11 9’s 11 个 9 | 99.99% | ≥3 | N/A 不适用 | milliseconds 毫秒 | N/A 不适用 |
| S3 Intelligent Tiering S3 智能分层 | 11 9’s 11 个 9 | 99.9% | ≥3 | 30 days 30 天 | milliseconds 毫秒 | N/A 不适用 |
| S3 Standard-IA (Infrequent Access) S3 标准-IA(不频繁访问) | 11 9’s 11 个 9 | 99.9% | ≥3 | 30 days 30 天 | milliseconds 毫秒 | per GB 每 GB |
| S3 One Zone-IA (Infrequent Access) S3 单可用区-IA(不频繁访问) | 11 9’s 11 个 9 | 99.5% | 1 | 30 days 30 天 | milliseconds 毫秒 | per GB 每 GB |
| S3 Glacier | 11 9’s 11 个 9 | 99.99% | ≥3 | 90 days 90 天 | Expedited (1-5 mins) 加急(1-5 分钟), Standard (3-5 hrs) 标准(3-5 小时), Bulk (5-12 hrs) 批量(5-12 小时) | per GB 每 GB |
| S3 Glacier Deep Archive | 11 9’s 11 个 9 | 99.99% | ≥3 | 180 days 180 天 | Standard (12 hrs) 标准(12 小时), Bulk (48 hrs) 批量(48 小时) | per GB 每 GB |
- You can upload files in the same bucket with different Storage Classes like S3 standard, Standard-IA, One Zone-IA, Glacier etc.
您可以在同一个存储桶中上传具有不同存储类别的对象,例如 S3 标准、标准-IA、One Zone-IA、Glacier 等。 - You can setup S3 Lifecycle Rules to transition current (or previous version) objects to cheaper storage classes or delete (expire if versioned) objects after certain days e.g.
您可以设置 S3 生命周期规则,将当前(或先前版本)对象转换到更便宜的存储类别,或在特定天数后删除(如果已版本化则过期)对象,例如:- transition from S3 Standard to S3 Standard-IA or One Zone-IA can only be done after 30 days.
从 S3 Standard 转换到 S3 Standard-IA 或 One Zone-IA 必须在 30 天后才能进行。 - transition from S3 Standard to S3 Intelligent Tiering, Glacier, or Glacier Deep Archive can be done immediately.
从 S3 Standard 转换到 S3 Intelligent Tiering、Glacier 或 Glacier Deep Archive 可以立即进行。
- You can also setup lifecycle rule to abort multipart upload, if it doesn’t complete within certain days, which auto delete the parts from S3 buckets associated with multipart upload.
您还可以设置生命周期规则,在多部分上传未在特定天数内完成时中止上传,这将自动删除与多部分上传关联的 S3 存储桶中的部分。
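The lifecycle rules above might look like the following configuration document (the rule ID, prefix, and day counts are hypothetical; the shape follows the S3 lifecycle configuration format):

```json
{
  "Rules": [
    {
      "ID": "archive-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```

Note the transitions respect the 30-day minimum before Standard-IA, and the last rule cleans up stale multipart upload parts automatically.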
Encryption 加密
- Encryption in transit between the client and S3 is achieved via SSL/TLS
客户端与 S3 之间传输中的加密通过 SSL/TLS 实现。 - You can add default encryption at bucket level and also override encryption at file level.
您可以在存储桶级别添加默认加密,也可以在文件级别覆盖加密。 - Encryption at rest - Server Side Encryption (SSE)
静态加密 - 服务器端加密 (SSE)- SSE-S3 AWS S3 managed keys, use AES-256 algorithm. Must set header:
"x-amz-server-side-encryption":"AES256"
SSE-S3 AWS S3 管理的密钥,使用 AES-256 算法。必须设置标头:"x-amz-server-side-encryption":"AES256" - SSE-KMS Envelope Encryption using AWS KMS managed keys. Must set header:
"x-amz-server-side-encryption":"aws:kms"
SSE-KMS 使用 AWS KMS 管理的密钥进行信封加密。必须设置标头:"x-amz-server-side-encryption":"aws:kms" - SSE-C Customer provides and manage keys. HTTPS is mandatory.
SSE-C 由客户提供和管理的密钥。必须使用 HTTPS。
- Encryption at rest - Client Side Encryption: the client encrypts the data before sending it to S3 and decrypts it after receiving it from S3.
静态加密 - 客户端加密 客户端在发送数据到 S3 之前和从 S3 接收数据之后对数据进行加密和解密。 - To meet PCI-DSS or HIPAA compliance, encrypt S3 using SSE-C and Client Side Encryption
为满足 PCI-DSS 或 HIPAA 合规性要求,请使用 SSE-C 和客户端加密来加密 S3
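A tiny sketch tying the SSE modes to the header values listed above (SSE-C is omitted because it additionally requires customer-provided key headers over HTTPS; the helper name is illustrative):

```python
# Sketch: the request header each server-side encryption mode adds on
# upload. Header names and values follow the S3 documentation; only the
# helper function itself is hypothetical.
def sse_headers(mode):
    return {
        "SSE-S3": {"x-amz-server-side-encryption": "AES256"},
        "SSE-KMS": {"x-amz-server-side-encryption": "aws:kms"},
    }[mode]
```

A client or SDK wrapper would merge these into the PUT request headers for each upload.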
Data Consistency 数据一致性
- S3 provides strong read-after-write consistency for PUTs and DELETEs of objects. PUTs applies to both writes to new objects as well as overwrite existing objects.
S3 为对象的 PUT 和 DELETE 操作提供强读写一致性。PUT 操作既适用于新对象的写入,也适用于覆盖现有对象。 - Updates to a single key are atomic. For example, if you PUT to an existing key from one thread and perform a GET on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data.
对单个键的更新是原子性的。例如,如果您在一个线程中 PUT 一个现有键,并在同一时间从第二个线程对同一键执行 GET 操作,您将获得旧数据或新数据,但绝不会获得部分或损坏的数据。
AWS Athena
- You can use AWS Athena (Serverless Query Engine) to perform analytics directly against S3 objects using SQL query and save the analysis report in another S3 bucket.
您可以使用 AWS Athena(无服务器查询引擎)直接对 S3 对象执行 SQL 查询分析,并将分析报告保存在另一个 S3 存储桶中。 - Use Case: one-time SQL query on S3 objects, S3 access log analysis, serverless queries on S3, IoT data analytics in S3, etc.
用例:对 S3 对象进行一次性 SQL 查询、S3 访问日志分析、对 S3 进行无服务器查询、S3 中的 IoT 数据分析等。
Instance Store 实例存储
- Instance Store is temporary block-based storage physically attached to an EC2 instance
实例存储是物理附加到 EC2 实例的临时块级存储 - Can be attached to an EC2 instance only when the instance is launched and cannot be dynamically resized
只能在启动实例时附加到 EC2 实例,并且无法动态调整大小 - Also known as Ephemeral Storage
也称为临时存储 - Deliver very low-latency and high random I/O performance
提供非常低的延迟和高随机 I/O 性能 - Data persists on instance reboot, data doesn’t persist on stop or termination
实例重启时数据会持久化,停止或终止时数据不会持久化
EBS (Elastic Block Store)
- EBS is block-based storage, referred as EBS Volume
EBS 是基于块的存储,称为 EBS 卷 - Think of an EBS Volume like a USB stick
EBS 卷可以理解为 U 盘- Can be attached to only one EC2 instance at a time. Can be detached & attached to another EC2 instance in that same AZ only
一次只能附加到一个 EC2 实例。可以分离并附加到同一可用区中的另一个 EC2 实例 - Can attach multiple EBS volumes to single EC2 instance. Data persist after detaching from EC2
可以将多个 EBS 卷附加到单个 EC2 实例。数据在从 EC2 分离后仍然存在
- EBS Snapshot is a backup of EBS Volume at a point in time. You can not copy EBS volume across AZ but you can create EBS Volume from Snapshot across AZ. EBS Snapshot can copy across AWS Regions.
EBS 快照是 EBS 卷在某个时间点的备份。您不能跨可用区复制 EBS 卷,但可以从快照跨可用区创建 EBS 卷。EBS 快照可以跨 AWS 区域复制。 - Facts about EBS Volume encryption:-
关于 EBS 卷加密的事实:-- All data at rest inside the volume is encrypted
卷中所有静态数据均已加密 - All data in flight between the volume and EC2 instance is encrypted
卷与 EC2 实例之间传输的所有数据均已加密 - All snapshots of encrypted volumes are automatically encrypted
加密卷的所有快照都会自动加密 - All volumes created from encrypted snapshots are automatically encrypted
从加密快照创建的所有卷都会自动加密 - Volumes created from unencrypted snapshots can be encrypted at the time of creation
从未加密快照创建的卷可以在创建时进行加密
- EBS supports dynamic changes in live production volume e.g. volume type, volume size, and IOPS capacity without service interruption
EBS 支持生产环境中实时卷的动态更改,例如卷类型、卷大小和 IOPS 容量,而不会中断服务 - There are two types of EBS volumes:-
EBS 卷有两种类型:- SSD for small/random IO operations, High IOPS means number of read and write operations per second, Only SSD EBS Volumes can be used as boot volumes for EC2
SSD 适用于小型/随机 IO 操作,高 IOPS 指每秒读写操作次数,只有 SSD EBS 卷可用作 EC2 的引导卷 - HDD for large/sequential IO operations, High Throughput means number of bytes read and write per second
HDD 用于大型/顺序 I/O 操作,高吞吐量表示每秒读取和写入的字节数
- EBS Volumes with two types of RAID configuration:-
EBS 卷有两种 RAID 配置:- RAID 0 (increase performance) two 500GB EBS Volumes with 4000 IOPS - creates 1000GB RAID0 Array with 8000 IOPS and 1000Mbps throughput
RAID 0(提高性能)两个 500GB EBS 卷,每个卷具有 4000 IOPS - 创建一个 1000GB 的 RAID0 阵列,具有 8000 IOPS 和 1000Mbps 的吞吐量 - RAID 1 (increase fault tolerance) two 500GB EBS Volumes with 4000 IOPS - creates 500GB RAID1 Array with 4000 IOPS and 500Mbps throughput
RAID 1(提高容错能力)两个 500GB EBS 卷,每个卷具有 4000 IOPS - 创建一个 500GB 的 RAID1 阵列,具有 4000 IOPS 和 500Mbps 的吞吐量
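The RAID arithmetic above can be captured in a small planning helper; a simplified model with illustrative names (it ignores real-world throughput ceilings and mirrors the doc's numbers only):

```python
# Sketch of the RAID sizing arithmetic: RAID 0 stripes capacity and IOPS
# across all volumes; RAID 1 mirrors, so the array keeps a single
# volume's capacity and write IOPS. Planning model, not a benchmark.
def raid_array(level, volume_gb, volume_iops, count=2):
    if level == 0:
        return {"gb": volume_gb * count, "iops": volume_iops * count}
    if level == 1:
        return {"gb": volume_gb, "iops": volume_iops}
    raise ValueError("only RAID 0 and RAID 1 modeled here")
```

So two 500GB/4000-IOPS volumes give a 1000GB/8000-IOPS RAID 0 array, or a 500GB/4000-IOPS RAID 1 array, matching the figures above.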
| EBS Volume Types EBS 卷类型 | Description 描述 | Usage 用法 |
|---|---|---|
| General Purpose SSD (gp2/gp3) 通用 SSD (gp2/gp3) | Max 16000 IOPS 最大 16000 IOPS | boot volumes, dev environment, virtual desktop 启动卷、开发环境、虚拟桌面 |
| Provisioned IOPS SSD (io1/io2) 预置 IOPS SSD (io1/io2) | 16000 - 64000 IOPS, EBS Multi-Attach 16000 - 64000 IOPS、EBS 多重附加 | critical business application, large SQL and NoSQL database workloads 关键业务应用程序、大型 SQL 和 NoSQL 数据库工作负载 |
| Throughput Optimized HDD (st1) 吞吐量优化 HDD (st1) | Low-cost, frequently accessed, throughput intensive 低成本、频繁访问、吞吐量密集型 | Big Data, Data warehouses, log processing 大数据、数据仓库、日志处理 |
| Cold HDD (sc1) 冷 HDD (sc1) | Lowest-cost, infrequently accessed 最低成本,不经常访问 | Large data with lowest cost 海量数据,最低成本 |
EFS (Elastic File System)
- EFS is a POSIX-compliant file-based storage
EFS 是一个符合 POSIX 标准的文件存储 - EFS supports file systems semantics - strong read-after-write consistency and file locking
EFS 支持文件系统语义——强读写一致性和文件锁定 - highly scalable - can automatically scale from gigabytes to petabytes of data without needing to provision storage. In bursting mode, throughput increases as the file system grows in size.
高度可扩展——无需预置存储即可自动从 GB 扩展到 PB 级数据。在突发模式下,吞吐量会随着文件系统大小的增长而增加。 - highly available - stores data redundantly across multiple Availability Zones
高可用性——数据冗余存储在多个可用区中 - Network File System (NFS) that can be mounted on and accessed concurrently by thousands of EC2 in multiple AZs without sacrificing performance.
网络文件系统 (NFS),可以被数千个 EC2 实例在多个可用区 (AZ) 中同时挂载和访问,而不会牺牲性能。 - EFS file systems can be accessed by Amazon EC2 Linux instances, Amazon ECS, Amazon EKS, AWS Fargate, and AWS Lambda functions via a file system interface such as NFS protocol.
EFS 文件系统可以通过文件系统接口(如 NFS 协议)被 Amazon EC2 Linux 实例、Amazon ECS、Amazon EKS、AWS Fargate 和 AWS Lambda 函数访问。 - Performance Mode: 性能模式:
- General Purpose for most file system for low-latency file operations, good for content-management, web-serving etc.
通用模式 (General Purpose),适用于大多数文件系统,可实现低延迟文件操作,非常适合内容管理、Web 服务等场景。 - Max I/O is optimized for use with tens, hundreds, or thousands of EC2 instances needing high aggregate throughput and IOPS, with slightly higher latency for file operations; good for big data analytics and media processing workflows
Max I/O 针对与数以千计的 EC2 实例配合使用进行了优化,可提供高聚合吞吐量和 IOPS,文件操作的延迟略高,非常适合大数据分析、媒体处理工作流
- Use case: Share files, images, software updates, or computing across all EC2 instances in ECS, EKS cluster
用例:在 ECS、EKS 集群中的所有 EC2 实例之间共享文件、图像、软件更新或计算资源
FSx for Windows
- Windows-based file system supports SMB protocol & Windows NTFS
Windows 文件系统支持 SMB 协议和 Windows NTFS - supports Microsoft Active Directory (AD) integration, ACLs, user quotas
支持 Microsoft Active Directory (AD) 集成、ACL、用户配额
FSx for Lustre
- Lustre = Linux + Cluster is a POSIX-compliant parallel linux file system, which stores data across multiple network file servers
Lustre = Linux + Cluster 是一个符合 POSIX 标准的并行 Linux 文件系统,它将数据存储在多个网络文件服务器上 - High-performance file system for fast processing of workload with consistent sub-millisecond latencies, up to hundreds of gigabytes per second of throughput, and up to millions of IOPS.
用于对工作负载进行快速处理的高性能文件系统,具有一致的亚毫秒级延迟、高达数百 GB/秒的吞吐量以及高达数百万 IOPS。 - Use it for Machine learning, High-performance computing (HPC), video processing, financial modeling, genome sequencing, and electronic design automation (EDA).
将其用于机器学习、高性能计算 (HPC)、视频处理、金融建模、基因组测序和电子设计自动化 (EDA)。 - You can use FSx for Lustre as hot storage for your highly accessed files, and Amazon S3 as cold storage for rarely accessed files.
您可以将 FSx for Lustre 用作高访问文件的热存储,将 Amazon S3 用作低访问文件的冷存储。 - Seamless integration with Amazon S3 - connect your S3 data sets to your FSx for Lustre file system, run your analyses, write results back to S3, and delete your file system
与 Amazon S3 无缝集成 - 将您的 S3 数据集连接到您的 FSx for Lustre 文件系统,运行您的分析,将结果写回 S3,然后删除您的文件系统 - FSx for Lustre provides two deployment options:-
FSx for Lustre 提供两种部署选项:-- Scratch file systems - for temporary storage and short-term processing
临时文件系统 - 用于临时存储和短期处理 - Persistent file systems - for high available & persist storage and long-term processing
持久文件系统 - 用于高可用性、持久存储和长期处理
Database 数据库
RDS (Relational Database Service)
RDS (关系数据库服务)
- AWS Managed Service to create PostgreSQL, MySQL, MariaDB, Oracle, Microsoft SQL Server, and Amazon Aurora in the cloud
在云中创建 PostgreSQL、MySQL、MariaDB、Oracle、Microsoft SQL Server 和 Amazon Aurora 的 AWS 托管服务 - Scalability: up to 5 read replicas; replication is asynchronous, so reads are eventually consistent.
可扩展性:最多支持 5 个只读副本,复制是异步的,因此读取最终是一致的。 - Availability use Multi-AZ Deployment, synchronous replication
可用性:使用多可用区部署,同步复制 - You can create a read replica in a different region of your running RDS instance. You pay for replication cross Region, but not for cross AZ.
您可以为正在运行的 RDS 实例创建位于不同区域的只读副本。跨区域复制需要付费,但跨可用区复制不需要。 - Automatic failover by switching the CNAME from primary to standby database
通过将 CNAME 从主数据库切换到备用数据库实现自动故障转移 - Enable Password and IAM Database Authentication to authenticate using database password and user credentials through IAM users and roles, works with MySQL and PostgreSQL
启用密码和 IAM 数据库身份验证,通过 IAM 用户和角色使用数据库密码和用户凭据进行身份验证,支持 MySQL 和 PostgreSQL - Enable Enhanced Monitoring to see percentage of CPU bandwidth and total memory consumed by each database process (OS process thread) in DB instance
启用增强监控,以查看数据库实例中每个数据库进程(操作系统进程线程)消耗的 CPU 带宽百分比和总内存 - Enable Automated Backup for daily storage volume snapshot of your DB instance with retention-period from 1 day (default from CLI, SDK) to 7 days (default from console) to 35 days (max). Use AWS Backup service for retention-period of 90 days.
启用自动备份,用于数据库实例的每日存储卷快照,保留期从 1 天(CLI、SDK 默认值)到 7 天(控制台默认值)再到 35 天(最大值)。使用 AWS Backup 服务可实现 90 天的保留期。 - To encrypt an unencrypted RDS DB instance, take a snapshot, copy snapshot and encrypt new snapshot with AWS KMS. Restore the DB instance with the new encrypted snapshot.
要加密未加密的 RDS 数据库实例,请拍摄快照,复制快照并使用 AWS KMS 加密新快照。使用新的加密快照恢复数据库实例。
Amazon Aurora
- Amazon fully managed relational database compatible with MySQL and PostgreSQL
与 MySQL 和 PostgreSQL 兼容的 Amazon 完全托管的关系数据库 - Provide 5x throughput of MySQL and 3x throughput of PostgreSQL
提供比 MySQL 高 5 倍的吞吐量,比 PostgreSQL 高 3 倍的吞吐量 - Aurora Global Database is single database span across multiple AWS regions, enable low-latency global reads and disaster recovery from region-wide outage. Use global database for disaster recovery having RPO of 1 second and RTO of 1 minute.
Aurora 全球数据库是跨越多个 AWS 区域的单一数据库,可实现低延迟的全球读取和区域范围中断的灾难恢复。使用全球数据库进行灾难恢复,RPO 为 1 秒,RTO 为 1 分钟。 - Aurora Serverless capacity type is used for on-demand auto-scaling for intermittent, unpredictable, and sporadic workloads.
Aurora Serverless 容量类型用于间歇性、不可预测和零星工作负载的按需自动扩展。 - Typically operates as a DB cluster consisting of one or more DB instances and a cluster volume that manages cluster data with each AZ having a copy of volume.
通常作为一个数据库集群运行,该集群包含一个或多个数据库实例和一个集群卷,集群卷管理集群数据,每个可用区都有一个卷副本。- Primary DB instance - Only one primary instance, supports both read and write operation
主数据库实例 - 只有一个主实例,支持读写操作 - Aurora Replica - Upto 15 replicas spread across different AZ, supports only read operation, automatic failover if primary DB instance fails, high availability
Aurora 副本 - 最多 15 个副本分布在不同的可用区 (AZ),仅支持读取操作,主数据库实例故障时自动故障转移,高可用性
- Connection Endpoints
  - Cluster endpoint - only one cluster endpoint, connects to the primary DB instance; only this endpoint can perform write (DDL, DML) operations
  - Reader endpoint - one reader endpoint, provides load balancing for all read-only connections to read from the Aurora replicas
  - Custom endpoint - up to 5 custom endpoints, read or write from a specified group of DB instances in the cluster, used for specialized workloads to route traffic to high-capacity or low-capacity instances
  - Instance endpoint - connects directly to a specified DB instance, generally used to improve connection speed after failover
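The endpoint roles above can be pictured with a small simulation. This is a toy model, not the AWS API: `AuroraClusterSim` and its method names are made up for illustration, and the reader endpoint's load balancing is approximated as simple round-robin.

```python
import itertools

class AuroraClusterSim:
    """Toy model of Aurora connection endpoints (not the AWS API).

    One writer behind the cluster endpoint, up to 15 replicas behind
    a reader endpoint that spreads read-only connections across them.
    """
    MAX_REPLICAS = 15

    def __init__(self, primary, replicas):
        if len(replicas) > self.MAX_REPLICAS:
            raise ValueError("Aurora supports at most 15 replicas")
        self.primary = primary
        self.replicas = list(replicas)
        self._rr = itertools.cycle(self.replicas)

    def cluster_endpoint(self):
        # Only the cluster endpoint reaches the writer (DDL/DML).
        return self.primary

    def reader_endpoint(self):
        # The reader endpoint load-balances read-only connections.
        return next(self._rr)

    def failover(self):
        # On primary failure, a replica is promoted automatically.
        self.primary = self.replicas.pop(0)
        self._rr = itertools.cycle(self.replicas)
        return self.primary

cluster = AuroraClusterSim("writer-1", ["replica-1", "replica-2"])
reads = [cluster.reader_endpoint() for _ in range(4)]
# reads alternates between replica-1 and replica-2
```

In the real service the reader endpoint is a DNS name that resolves to different replicas, so the balancing is coarser than strict round-robin.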
DynamoDB
- AWS proprietary, serverless, managed NoSQL database
- Use to store JSON documents or session data
- Use as a distributed serverless cache with single-digit millisecond performance
- Provisioned Capacity: provision WCU & RCU, can enable auto-scaling, good for predictable workloads
- On-demand Capacity: unlimited WCU & RCU, more expensive, good for unpredictable workloads where reads & writes are infrequent (low throughput)
- Add a DAX (DynamoDB Accelerator) cluster in front of DynamoDB to cache frequently read values and offload heavy reads on hot keys of DynamoDB, preventing `ProvisionedThroughputExceededException`
- Enable DynamoDB Streams to trigger events on the database and integrate with a Lambda function, e.g. to send a welcome email to a user added to the table.
- Use DynamoDB Global Tables to serve data globally. You must enable DynamoDB Streams first to create a global table.
- You can use AWS DMS (Database Migration Service) to migrate from MongoDB, Oracle, MySQL, S3, etc. to DynamoDB
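The DAX idea (cache in front of the table to absorb hot-key reads) can be sketched with a read-through cache. This is a minimal illustration with a plain dict standing in for the table; `ReadThroughCache` is a hypothetical name, not the DAX SDK.

```python
class ReadThroughCache:
    """Toy DAX-style read-through cache in front of a key-value store.

    Hot keys are served from the cache, offloading repeated reads
    from the backing table (illustrative only, not the DAX API).
    """
    def __init__(self, table):
        self.table = table          # dict standing in for a DynamoDB table
        self.cache = {}
        self.table_reads = 0        # reads that actually hit the table

    def get(self, key):
        if key in self.cache:       # cache hit: no table capacity consumed
            return self.cache[key]
        self.table_reads += 1       # cache miss: read from the table
        value = self.table[key]
        self.cache[key] = value
        return value

store = ReadThroughCache({"user#1": {"name": "alice"}})
for _ in range(1000):               # 1,000 reads on a hot key...
    store.get("user#1")
# ...cost only a single table read, so provisioned RCU are not exhausted
```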
ElastiCache
- AWS managed service for Redis or Memcached
- Use as a distributed cache with sub-millisecond performance
- ElastiCache for Redis
  - Offers Multi-AZ with auto-failover, cluster mode
  - Use a password/token to access data via Redis AUTH
  - HIPAA compliant
- ElastiCache for Memcached
  - Intended to speed up dynamic web applications
  - Not HIPAA compliant
Redshift
- Columnar database, OLAP (online analytical processing)
- Supports Massively Parallel Processing (MPP) query execution
- Use for data analytics and data warehousing
- Integrate with Business Intelligence (BI) tools like Amazon QuickSight or Tableau for analytics
- Use Redshift Spectrum to query an S3 bucket directly without loading the data into Redshift
Amazon Kinesis
Amazon Kinesis is a fully managed service for collecting, processing and analyzing streaming real-time data in the cloud. Real-time data generally comes from IoT devices, gaming applications, vehicle tracking, clickstream, etc.
- Kinesis Data Streams capture, process, and store data streams.
  - Producers can be the Amazon Kinesis Agent, the SDK, or the Kinesis Producer Library (KPL)
  - Consumers can be Kinesis Data Analytics, Kinesis Data Firehose, or the Kinesis Client Library (KCL)
  - Data retention period from 24 hours (default) to 365 days (max).
  - Order is maintained at the Shard (partition) level.
- Kinesis Data Firehose loads data streams into AWS data stores such as S3, Amazon Redshift, and Elasticsearch. Transform data using Lambda functions and store failed records in another S3 bucket.
- Kinesis Data Analytics analyzes data streams with SQL or Apache Flink
- Kinesis Video Streams capture, process, and store video streams
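Per-shard ordering follows from how records are sharded: Kinesis hashes each record's partition key (MD5) into a shard's hash-key range, so all records with the same key land on the same shard in arrival order. Below is a simplified sketch; the `% num_shards` mapping is an assumption standing in for the real hash-key ranges.

```python
import hashlib
from collections import defaultdict

def shard_for(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard, roughly how Kinesis hashes
    partition keys into shard hash-key ranges (simplified with modulo)."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

shards = defaultdict(list)
events = [("vehicle-42", "pos1"), ("vehicle-7", "posA"), ("vehicle-42", "pos2")]
for key, payload in events:
    # Same partition key -> same shard, so pos1 precedes pos2 there.
    shards[shard_for(key, 4)].append((key, payload))
```

Records for different partition keys may interleave, but within one key (one shard) the order is preserved.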
Amazon EMR
- EMR = Elastic MapReduce
- Big data cloud platform for processing vast amounts of data using open-source tools such as Hadoop, Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.
- EMR can be used to perform data transformation workloads - Extract, Transform, Load (ETL)
- Use case: analyze clickstream data from S3 using Apache Spark and Hive to deliver more effective ads
Neptune
- Graph database
- Use case: highly connected data, social networking data, knowledge graphs (Wikipedia)
ElasticSearch
- Amazon-managed Elasticsearch service (now Amazon OpenSearch Service)
- Integration with Kinesis Data Firehose, AWS IoT, and CloudWatch Logs
- Use case: search, indexing, partial or fuzzy search
Migration
AWS Snow Family
- The AWS Snow Family is used for large-scale migration of on-premises data to S3 buckets and for processing data at locations with limited network connectivity.
- You need to install the AWS OpsHub software to transfer files from your on-premises machine to the Snow device.
- You cannot migrate directly to Glacier; create an S3 bucket first with a lifecycle policy to move files to Glacier. (You can transfer to Glacier directly using DataSync.)
| Family Member | Storage | RAM | Migration Type | DataSync | Migration Size |
|---|---|---|---|---|---|
| Snowcone | 8TB | 4GB | online & offline | yes | GBs and TBs |
| Snowball Edge Storage Optimized | 80TB | 80GB | offline | no | petabyte scale |
| Snowball Edge Compute Optimized | 42TB | 208GB | offline | no | petabyte scale |
| Snowmobile | 100PB | N/A | offline | no | exabyte scale |
AWS Storage Gateway
Storage Gateway is a hybrid cloud service to move on-premises data to the cloud and connect on-premises applications with cloud storage.
| Storage Gateway | Protocol | Backed by | Use Case |
|---|---|---|---|
| File Gateway | NFS & SMB | S3 -> S3-IA, S3 One Zone-IA | Store files as objects in S3, with a local cache for low-latency access and user auth via Active Directory |
| FSx File Gateway | SMB & NTFS | FSx -> S3 | Windows or Lustre file server, integration with Microsoft AD |
| Volume Gateway | iSCSI | S3 -> EBS | Block storage in S3 with backups as EBS snapshots. Use Cached Volumes for low latency and Stored Volumes for scheduled backups |
| Tape Gateway | iSCSI VTL | S3 -> S3 Glacier & Glacier Deep Archive | Back up data in S3 and archive in Glacier using a tape-based process |
AWS DataSync
- AWS DataSync is used for large-scale data migration from on-premises storage systems (using the NFS and SMB storage protocols) to AWS storage (such as S3, EFS, FSx for Windows, or AWS Snowcone) over the internet
- AWS DataSync can archive on-premises cold data directly to S3 Glacier or S3 Glacier Deep Archive
- AWS DataSync can migrate data directly to any S3 storage class
- Use DataSync with Direct Connect to migrate data over a secure private network to an AWS service associated with a VPC endpoint.
AWS Backup
- AWS Backup centrally manages and automates the backup process for EC2 instances, EBS volumes, EFS, RDS databases, DynamoDB tables, FSx for Lustre, FSx for Windows File Server, and Storage Gateway volumes
- Use case: automate backup of RDS with a 90-day retention policy. (Automated backup using RDS directly has a max 35-day retention period)
Database Migration Service (DMS)
- DMS helps you migrate databases to AWS with the source remaining fully operational during the migration, minimizing downtime
- You need to select an EC2 instance to run DMS in order to migrate (and replicate) a database from source => target, e.g. on-premises => AWS, AWS => AWS, or AWS => on-premises
- DMS supports both homogeneous migrations, such as on-premises PostgreSQL => AWS RDS PostgreSQL, and heterogeneous migrations, such as SQL Server or Oracle => MySQL, PostgreSQL, or Aurora, or Teradata or Oracle => Amazon Redshift
- You need to run AWS SCT (Schema Conversion Tool) at the source for heterogeneous migrations
AWS Application Migration Service (MGN)
- Migrate virtual machines from VMware vSphere, Microsoft Hyper-V, or Microsoft Azure to AWS
- AWS Application Migration Service (new) utilizes continuous, block-level replication and enables cutover windows measured in minutes
- AWS Server Migration Service (legacy) utilizes incremental, snapshot-based replication and enables cutover windows measured in hours.
Networking & Content Delivery
Amazon VPC
- CIDR block - Classless Inter-Domain Routing, an internet protocol address allocation and route aggregation methodology. A CIDR block has two components - a Base IP (WW.XX.YY.ZZ) and a Subnet Mask (/0 to /32). For example:
  - 192.168.0.0/32 means 2^(32-32) = 1 single IP
  - 192.168.0.0/24 means 2^(32-24) = 256 IPs ranging from 192.168.0.0 to 192.168.0.255 (last number can change)
  - 192.168.0.0/16 means 2^(32-16) = 65,536 IPs ranging from 192.168.0.0 to 192.168.255.255 (last 2 numbers can change)
  - 192.168.0.0/8 means 2^(32-8) = 16,777,216 IPs ranging from 192.0.0.0 to 192.255.255.255 (last 3 numbers can change)
  - 0.0.0.0/0 means 2^(32-0) = all IPs ranging from 0.0.0.0 to 255.255.255.255 (all 4 numbers can change)
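The block sizes above can be checked with Python's standard `ipaddress` module, which computes 2^(32 - prefix) and the address range for you:

```python
import ipaddress

# Size of a CIDR block is 2^(32 - prefix length).
block = ipaddress.ip_network("192.168.0.0/24")
print(block.num_addresses)        # 256
print(block[0], block[-1])        # 192.168.0.0 192.168.0.255

# A /16 leaves the last two octets free to change:
big = ipaddress.ip_network("192.168.0.0/16")
print(big.num_addresses)          # 65536

# 0.0.0.0/0 matches every IPv4 address:
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4294967296
```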
- VPC (Virtual Private Cloud)
  - A virtual network dedicated to your AWS account.
  - VPCs are region-specific; they do not span regions
  - Every region comes with a default VPC. You can create up to 5 VPCs per region.
  - You can assign a max of 5 IPv4 CIDR blocks per VPC, with a min block size of /28 = 16 IPs and a max size of /16 = 65,536 IPs. You can assign a secondary CIDR range later if the primary CIDR IPs are exhausted.
  - Use the private IP ranges for the IPv4 CIDR block - 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
  - Your VPC CIDR block should not overlap with other VPC networks within your AWS account.
  - Enable DNS resolution and DNS hostnames on the VPC; EC2 instances created in that VPC will be assigned a domain name address
- VPC Peering
  - VPC peering connects two VPCs over a direct network route using private IP addresses
  - Instances in peered VPCs behave just as if they were on the same network
  - Must have no overlapping CIDR blocks
  - VPC peering connections are not transitive, i.e. VPC-A peered with VPC-B and VPC-B peered with VPC-C does not mean VPC-A is peered with VPC-C
  - Route tables must be updated in both peered VPCs so that instances can communicate
  - Can connect one VPC to another in the same or a different region. VPC peering across regions is called VPC inter-region peering
  - Can connect one VPC to another in the same or a different AWS account
- Subnet
  - A range of IP addresses in your VPC
  - Each subnet is tied to one Availability Zone, one route table, and one network ACL
  - You assign one CIDR block per subnet within the CIDR range of your VPC. It should not overlap with other subnets' CIDRs in your VPC.
  - AWS reserves 5 IP addresses (the first 4 and the last 1) from the CIDR block in each subnet. For example, if you need 29 usable IP addresses, you should choose CIDR /26 = 64 IPs and not /27 = 32 IPs, since 5 IPs are reserved and cannot be used.
  - Enable auto-assign public IPv4 address on public subnets; EC2 instances created in public subnets will be assigned a public IPv4 address
  - If you have 3 AZs in a region, you create a total of 6 subnets - 3 private subnets (1 in each AZ) and 3 public subnets (1 in each AZ) for a multi-tier, highly available architecture. API Gateway and ALB reside in the public subnets; EC2 instances, Lambda, and databases reside in the private subnets.
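The "5 reserved IPs" sizing rule is easy to get wrong under exam pressure, so here is a small sketch of the arithmetic. The helper names are made up for illustration:

```python
AWS_RESERVED_PER_SUBNET = 5  # first 4 addresses + last 1 in every subnet

def usable_ips(prefix: int) -> int:
    """Usable addresses in a subnet with the given prefix length."""
    return 2 ** (32 - prefix) - AWS_RESERVED_PER_SUBNET

def smallest_subnet_for(hosts: int) -> int:
    """Longest prefix (smallest subnet) that still leaves enough
    usable addresses for `hosts` instances."""
    for prefix in range(28, -1, -1):  # /28 is the smallest VPC subnet
        if usable_ips(prefix) >= hosts:
            return prefix
    raise ValueError("too many hosts")

print(usable_ips(27))            # 27 usable addresses in a /27
print(smallest_subnet_for(29))   # 26 -> a /26 is needed for 29 hosts
```

A /27 gives 32 - 5 = 27 usable addresses, which is why 29 hosts force you up to a /26 (64 - 5 = 59 usable).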
- Route Table
  - A set of rules, called routes, used to determine where network traffic is directed.
  - Each subnet in your VPC must be associated with a route table.
  - A subnet can only be associated with one route table at a time
  - You can associate multiple subnets with the same route table. For example, you create 4 subnets in your VPC where 2 subnets are associated with one route table with no internet access rules (private subnets) and the other 2 subnets are associated with another route table with internet access rules (public subnets)
  - Each route in a route table has a Destination (such as an IP range) and a Target (such as local, IG, NAT, VPC endpoint, etc.)
  - A public subnet is a subnet associated with a route table that has a rule to connect to the internet via an Internet Gateway.
  - A private subnet is a subnet associated with a route table that has no rule to connect to the internet via an Internet Gateway. When subnets connected to the private route table need access to the internet, we set up a NAT Gateway in a public subnet, then add a rule to the private route table saying that all traffic destined for the internet should point to the NAT Gateway.
- Internet Gateway
  - An Internet Gateway allows AWS instances in public subnets to access the internet and be accessible from the internet
  - Each Internet Gateway is associated with one VPC only, and each VPC has one Internet Gateway only (one-to-one mapping)
- NAT Gateway
  - A NAT Gateway allows AWS instances in private subnets to access the internet while not being accessible from the internet
  - NAT Gateway (latest) is a managed service that launches redundant instances within the selected AZ (can survive the failure of an EC2 instance)
  - NAT Instances (legacy) are individual EC2 instances. Community AMIs exist to launch NAT Instances. Works the same as a NAT Gateway.
  - You can only have 1 NAT Gateway inside 1 AZ (it cannot span AZs).
  - You should create a NAT Gateway in each AZ for high availability, so that if a NAT Gateway goes down in one AZ, instances in other AZs are still able to access the internet.
  - A NAT Gateway resides in a public subnet. You must allocate an Elastic IP to the NAT Gateway and add a route in the private subnet route table with Destination `0.0.0.0/0` and Target `nat-gateway-id`
  - NAT Gateways are automatically assigned a public IP address
  - NAT Gateways/Instances work with IPv4
  - A NAT Gateway cannot be shared across VPCs
  - A NAT Gateway cannot be used as a Bastion host, whereas a NAT Instance can
- Bastion Host
  - A Bastion Host is an individual small EC2 instance in a public subnet. Community AMIs exist to launch Bastion Hosts.
  - Bastion Hosts are used to access AWS instances with private IPv4 addresses in private subnets via SSH on port 22
- Egress-Only Internet Gateway
  - Works the same as a NAT Gateway, but for IPv6
  - Egress-only means outgoing traffic only
  - IPv6 addresses are public by default. An Egress-Only Internet Gateway allows IPv6 instances in private subnets to access the internet while not being accessible from the internet
- Network ACL
  - Network Access Control List, commonly known as NACL
  - An optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
  - VPCs come with a modifiable default NACL. By default, it allows all inbound and outbound traffic.
  - You can create custom NACLs. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
  - Each subnet within a VPC must be associated with exactly 1 NACL
    - If you don't specify one, the subnet is automatically associated with the default NACL.
    - If you associate a subnet with a new NACL, the previous association is automatically removed
    - Applies to all instances in the associated subnet
  - Supports both Allow and Deny rules
  - Stateless means explicit rules for inbound and outbound traffic; return traffic must be explicitly allowed by rules
  - Evaluates rules in number order, starting with the lowest-numbered rule. NACL rules are numbered 1 to 32766, with lower numbers taking precedence. For example, `#100 ALLOW <IP>` and `#200 DENY <IP>` means the IP is allowed
  - Each network ACL also includes a rule whose rule number is an asterisk `*`. If none of the numbered rules matches, the traffic is denied. You can't modify or remove this rule.
  - It is recommended to create numbered rules in increments (for example, increments of 10 or 100) so that you can insert new rules where needed later on.
  - You can block a single IP address using a NACL, which you can't do using a Security Group
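The lowest-number-wins evaluation order plus the implicit `*` deny can be captured in a few lines. This is a simplified sketch: `nacl_evaluate` is a hypothetical helper, and matching is reduced to an exact IP comparison instead of CIDR/port matching.

```python
def nacl_evaluate(rules, ip):
    """Evaluate NACL rules in ascending rule-number order; the first
    matching rule wins, and the implicit '*' rule denies everything else."""
    for number, action, rule_ip in sorted(rules):
        if ip == rule_ip:          # simplified: exact-IP match only
            return action
    return "DENY"                  # the non-removable '*' catch-all rule

rules = [
    (200, "DENY",  "203.0.113.9"),
    (100, "ALLOW", "203.0.113.9"),  # lower number -> evaluated first
]
print(nacl_evaluate(rules, "203.0.113.9"))   # ALLOW (#100 beats #200)
print(nacl_evaluate(rules, "198.51.100.1"))  # DENY (falls through to '*')
```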
- Security Group
  - Controls inbound and outbound traffic at the EC2 instance level
  - Supports Allow rules only. All traffic is denied by default unless a rule specifically allows it.
  - Stateful means return traffic is automatically allowed, regardless of any rules
  - When you first create a security group, it has no inbound rules (denying all incoming traffic) and one outbound rule that allows all outgoing traffic.
  - You can specify the source in a security group rule to be an IP range, a specific IP (/32), or another security group
  - One security group can be associated with multiple instances across multiple subnets
  - One EC2 instance can be associated with multiple security groups, and rules are permissive (instead of restrictive). Meaning if one security group has no Allow for some traffic and you add an Allow in another, the traffic is allowed
  - All rules are evaluated before deciding whether to allow traffic
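The permissive, allow-only evaluation contrasts with NACL ordering: there are no Deny rules, so the result is simply "does any attached group allow it". A minimal sketch, with groups reduced to sets of allowed ports (`sg_allows` is a made-up helper, not an AWS API):

```python
def sg_allows(security_groups, port):
    """Security groups are permissive: traffic is allowed if ANY
    attached group has a matching Allow rule; there are no Deny rules."""
    return any(port in group for group in security_groups)

sg_app = set()            # brand-new group: no inbound rules -> deny all
sg_ssh = {22}             # a group that allows SSH

print(sg_allows([sg_app], 22))          # False: nothing allows port 22
print(sg_allows([sg_app, sg_ssh], 22))  # True: one Allow is enough
```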
- Transit Gateway is used to create transitive VPC peering connections between thousands of VPCs
  - Hub-and-spoke (star) connection
  - Supports IP multicast (not supported by any other AWS service)
  - Used as the gateway on the Amazon side of a VPN connection, not on the customer side
  - Can be attached to one or more VPCs, an AWS Direct Connect gateway, a VPN connection, or a peering connection to another Transit Gateway
- VPC Flow Logs
  - Allow you to capture IP traffic information in and out of network interfaces within your VPC
  - You can turn on Flow Logs at the VPC, subnet, or network interface level
  - VPC Flow Logs can be delivered to S3 or CloudWatch Logs. Query VPC Flow Logs using Athena on S3, or CloudWatch Logs Insights
  - VPC Flow Log records include the log version `version`, AWS account id `account-id`, network interface id `interface-id`, source IP address and port `srcaddr` & `srcport`, and destination IP address and port `dstaddr` & `dstport`
  - VPC Flow Logs contain source and destination IP addresses (not hostnames)
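The fields listed above are part of the default (version 2) flow log record, which is a space-separated line. A small parser shows where each field sits; the ENI id in the sample record is invented for illustration:

```python
# Field order of the default version-2 VPC Flow Log record format.
FLOW_LOG_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log(record: str) -> dict:
    """Split a default-format VPC Flow Log record into named fields."""
    return dict(zip(FLOW_LOG_FIELDS, record.split()))

rec = parse_flow_log(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(rec["srcaddr"], rec["dstport"], rec["action"])
# 172.31.16.139 22 ACCEPT  -> an accepted SSH flow, IPs only, no hostnames
```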
- IPv6 addresses are all public; all instances with IPv6 are publicly accessible. For private ranges, we still use IPv4. You cannot disable IPv4. If you enable IPv6 for the VPC and subnets, your EC2 instances get a private IPv4 address and a public IPv6 address
- Cost nothing: VPCs, route tables, NACLs, Internet Gateways, security groups, subnets, VPC peering
- Cost money: NAT Gateways, VPC endpoints, VPN Gateways, Customer Gateways
VPC endpoints
- VPC endpoints allow your VPC to connect to other AWS services privately within the AWS network
- Traffic between your VPC and other services never leaves the AWS network
- Eliminates the need for an Internet Gateway or NAT Gateway for instances in public and private subnets to access other AWS services through the public internet.
- There are two types of VPC endpoints:
  - Interface endpoints are Elastic Network Interfaces (ENIs) with a private IP address. They serve as an entry point for traffic going to most AWS services. Interface endpoints are provided by AWS PrivateLink and have an hourly fee plus a per-GB usage cost.
  - A Gateway endpoint is a gateway that is the target for a specific route in your route table, used for traffic destined for a supported AWS service. It currently supports only Amazon S3 and DynamoDB. Gateway endpoints are free
- If an EC2 instance wants to access an S3 bucket or DynamoDB in a different region privately within the AWS network, we first need VPC inter-region peering to connect the VPCs in both regions and then use a VPC gateway endpoint for S3 or DynamoDB.
- AWS PrivateLink is the VPC interface endpoint service used to expose a particular service to thousands of VPCs across accounts
- AWS ClassicLink (deprecated) connects EC2-Classic instances privately to your VPC
AWS VPN
- An AWS Site-to-Site VPN connection is created to communicate between your remote network and an Amazon VPC over the internet
- VPN connection: a secure connection between your on-premises equipment and your Amazon VPCs.
- VPN tunnel: an encrypted link where data can pass from the customer network to or from AWS. Each VPN connection includes two VPN tunnels, which you can use simultaneously for high availability.
- Customer gateway: an AWS resource that provides information to AWS about your customer gateway device.
- Customer gateway device: a physical device or software application on the customer side of the Site-to-Site VPN connection.
- Virtual private gateway: the VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
- Transit gateway: a transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
AWS Direct Connect
- Establish a dedicated private connection from on-premises locations to the AWS VPC network.
- Can access public resources (S3) and private resources (EC2) on the same connection
- Provides 1 Gbps to 100 Gbps of network bandwidth for fast transfer of data from on-premises to the cloud
- Not an immediate solution, because it takes a few days to establish a new Direct Connect connection
| AWS VPN | AWS Direct Connect |
|---|---|
| Over the internet | Over a dedicated private connection |
| Configured in minutes | Configured in days |
| Low to modest bandwidth | High bandwidth, 1 to 100 Gbps |
Amazon API Gateway
- Serverless; create and manage APIs that act as a front door for back-end systems running on EC2, AWS Lambda, etc.
- API Gateway types - HTTP, WebSocket, and REST
- Allows you to track and control API usage. Set a throttle limit (default 10,000 req/s) to prevent being overwhelmed by too many requests; throttled requests receive a 429 `Too Many Requests` error response. It uses the token-bucket algorithm, where the burst size is the max bucket size. For a throttle limit of 10,000 req/s and a burst of 5,000 requests, if 8,000 requests arrive in the first millisecond, then 5,000 are served immediately and the remaining 3,000 are throttled over the one-second period.
- Caching can be enabled to cache your API responses, reducing the number of API calls and improving latency
- API Gateway authentication
  - IAM policies are used for authentication and authorization of AWS users, leveraging Sig v4 to pass IAM credentials in the request header
  - A Lambda Authorizer (formerly Custom Authorizer) uses Lambda for OAuth, SAML, or any other third-party authentication
  - Cognito User Pools only provide authentication. Manage your own user pool (can be backed by Facebook, Google, etc.)
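The throttling behaviour above can be sketched with a minimal token bucket (an illustrative model, not AWS's actual implementation; the numbers mirror the 10,000 req/s limit and 5,000 burst example):

```python
# Minimal token-bucket sketch: the bucket holds up to `burst` tokens and
# refills at `rate` tokens per second. A request is served if a token is
# available, otherwise it is throttled with a 429 error.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate            # steady-state limit (tokens per second)
        self.burst = burst          # burst size = max bucket size
        self.tokens = float(burst)  # bucket starts full
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True             # request served
        return False                # request throttled -> 429 Too Many Requests

bucket = TokenBucket(rate=10_000, burst=5_000)
# 8,000 requests arriving in the first millisecond:
served = sum(bucket.allow(now=0.001) for _ in range(8_000))
print(served)  # → 5000: the burst is served, the other 3,000 are throttled
```

The throttled requests would then be retried (or dropped) by the client; over a full second the bucket refills enough to serve the steady-state 10,000 req/s.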
Amazon CloudFront
- It’s a Content Delivery Network (CDN) that uses AWS edge locations to cache and deliver cached content (such as images and videos)
它是一个内容分发网络(CDN),使用 AWS 边缘站点缓存和分发缓存的内容(例如图片和视频) - CloudFront can cache data from an origin, e.g.:
CloudFront 可以缓存来自源的数据,例如:- S3 bucket using OAI (Origin Access Identity) and S3 bucket policy
使用 OAI(源访问标识符)和 S3 存储桶策略的 S3 存储桶 - EC2 or ALB if they are public and security group allows
EC2 或 ALB(如果它们是公共的并且安全组允许)
- Origin Access Identity (OAI) can be used to restrict the content from S3 origin to be accessible from CloudFront only
源访问标识符 (OAI) 可用于限制来自 S3 源的内容只能从 CloudFront 访问 - supports Geo restriction (Geo-Blocking) to whitelist or blacklist countries that can access the content
支持地理位置限制(地理封锁),可将允许或禁止访问内容的国家/地区列入白名单或黑名单 - supports Web download distribution (static, dynamic web content, video streaming) and RTMP Streaming distribution (media files from Adobe media server using RTMP protocol)
支持 Web 下载分发(静态、动态 Web 内容、视频流)和 RTMP 流分发(使用 RTMP 协议从 Adobe 媒体服务器获取的媒体文件) - You can generate a Signed URL (for a single file and RTMP streaming) or Signed Cookie (for multiple files) to share content with premium users
您可以生成签名 URL(用于单个文件和 RTMP 流)或签名 Cookie(用于多个文件),与高级用户共享内容 - integrates with AWS WAF, a web application firewall to protect from layer 7 attacks
与 AWS WAF 集成,AWS WAF 是一种 Web 应用程序防火墙,可防御第 7 层攻击 - Objects are removed from the cache upon expiry (TTL), by default 24 hours.
对象在过期(TTL)后会从缓存中移除,默认有效期为 24 小时。 - You can explicitly invalidate an object (web distributions only, at a cost), which removes it from the CloudFront cache. Alternatively, change the object name or use versioned file names to serve new content.
您可以明确使对象失效(仅限 Web 分发,会产生费用),将其从 CloudFront 缓存中移除。或者,更改对象名称或使用带版本的文件名来提供新内容。
Amazon Route 53
- AWS Managed Service to create DNS Records (Domain Name System)
用于创建 DNS 记录(域名系统)的 AWS 托管服务 - Browser cache the resolved IP from DNS for TTL (time to live)
浏览器缓存 DNS 解析的 IP 地址,缓存时间由 TTL(生存时间)决定 - Expose public IP of EC2 instances or load balancer
公开 EC2 实例或负载均衡器的公有 IP 地址 - Domain Registrar If you want to use Route 53 for domains purchased from 3rd party websites like GoDaddy.
域名注册商 如果您想将从 GoDaddy 等第三方网站购买的域名用于 Route 53。- AWS - You need to create a Hosted Zone in Route 53
AWS - 您需要在 Route 53 中创建一个托管区域 - GoDaddy - update the 3rd party registrar NS (name server) records to use Route 53.
GoDaddy - 更新第三方注册商的 NS(名称服务器)记录以使用 Route 53。
- Private Hosted Zone is used to create an internal (intranet) domain name to be used within Amazon VPC. You can then add some DNS records and routing policies for that internal domain. That internal domain is accessible from EC2 instances or any other resource within VPC.
私有托管区域用于创建将在 Amazon VPC 内部使用的内部(内网)域名。然后,您可以为该内部域名添加一些 DNS 记录和路由策略。该内部域名可从 VPC 内的 EC2 实例或任何其他资源访问。
DNS Record: Type DNS 记录:类型
- CNAME points hostname to any other hostname. Only works with subdomains e.g.
something.mydomain.com
CNAME 将主机名指向任何其他主机名。仅适用于子域,例如something.mydomain.com - A or AAAA (Alias) points hostname to an AWS Resource like ALB, API Gateway, CloudFront, S3 Bucket, Global Accelerator, Elastic Beanstalk, VPC interface endpoint etc. Works with both root-domain and subdomains e.g.
mydomain.com. AAAA is used for IPv6 addresses.
A 或 AAAA(别名)将主机名指向 AWS 资源,例如 ALB、API Gateway、CloudFront、S3 存储桶、Global Accelerator、Elastic Beanstalk、VPC 接口端点等。适用于根域和子域,例如 mydomain.com。AAAA 用于 IPv6 地址。
DNS Record: Routing Policy
DNS 记录:路由策略
- Simple to route traffic to specific IP using a single DNS record. Also allows you to return multiple IPs after resolving DNS.
简单地使用单个 DNS 记录将流量路由到特定 IP。还允许您在解析 DNS 后返回多个 IP。 - Weighted to route traffic to different IPs based on weights (between 0 to 255) e.g. create 3 DNS records for weights 70, 20, and 10.
加权,根据权重(0 到 255 之间)将流量路由到不同的 IP,例如为权重 70、20 和 10 创建 3 个 DNS 记录。 - Latency to route traffic to different IPs based on AWS regions nearest to the client for low-latency e.g. create 3 DNS records with region us-east-1, eu-west-2, and ap-east-1
延迟,根据离客户端最近的 AWS 区域将流量路由到不同的 IP 以实现低延迟,例如创建 3 个 DNS 记录,区域分别为 us-east-1、eu-west-2 和 ap-east-1 - Failover to route traffic from Primary to Secondary in case of failover e.g. create 2 DNS records for primary and secondary IP. It is mandatory to create health check for both IP and associate to record.
故障转移(Failover)用于在发生故障时将流量从主节点路由到备用节点,例如为主节点和备用节点 IP 创建 2 个 DNS 记录。必须为两个 IP 创建运行状况检查,并将其与记录关联。 - Geolocation to route traffic to specific IP based on user geolocation (select Continent or Country). Should also create default (select Default location) policy in case there’s no match on location.
地理位置(Geolocation)根据用户的地理位置(选择大洲或国家/地区)将流量路由到特定的 IP。还应创建默认(选择默认位置)策略,以防位置不匹配。 - Geoproximity to route traffic to specific IP based on user geolocation and bias value. Positive bias (1 to 99) for more traffic and negative bias (-1 to -99) for less traffic. You can control the traffic from specific geolocation using bias value.
地理邻近性(Geoproximity)根据用户的地理位置和偏差值将流量路由到特定的 IP。正偏差值(1 到 99)表示更多流量,负偏差值(-1 到 -99)表示更少流量。您可以使用偏差值控制来自特定地理位置的流量。 - Multivalue Answer to return up to 8 healthy IPs after resolving DNS e.g. create 3 DNS records with associated health checks. Acts as a client-side load balancer; if an EC2 instance becomes unhealthy, expect downtime of up to the TTL while cached answers expire.
多值应答(Multivalue Answer)在解析 DNS 后返回多达 8 个健康的 IP,例如创建 3 个带有关联运行状况检查的 DNS 记录。它充当客户端负载均衡器;如果 EC2 实例出现不健康状况,预计会有最长为 TTL 的停机时间,直到缓存的解析结果过期。
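The weighted policy above can be sketched as follows (hypothetical IPs and the 70/20/10 weights from the example; Route 53's real selection is probabilistic, modeled here by a number `r` drawn from `[0, total_weight)`):

```python
# Sketch of weighted-routing resolution: each record owns a slice of the
# cumulative weight range, and the draw r selects the record whose slice
# contains it. Weights 70/20/10 give 70%/20%/10% of the traffic.

RECORDS = [("10.0.0.1", 70), ("10.0.0.2", 20), ("10.0.0.3", 10)]

def resolve(records, r):
    """Return the IP whose cumulative weight range contains r."""
    upper = 0
    for ip, weight in records:
        upper += weight
        if r < upper:
            return ip
    raise ValueError("r must be < total weight")

print(resolve(RECORDS, 5))    # → 10.0.0.1 (range 0-69)
print(resolve(RECORDS, 75))   # → 10.0.0.2 (range 70-89)
print(resolve(RECORDS, 95))   # → 10.0.0.3 (range 90-99)
```

Setting a record's weight to 0 removes it from the draw, which is how weighted records are drained during a blue/green cut-over.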
DNS Failover DNS 故障转移
- active-active failover when you want all resources to be available the majority of the time. All records have the same name, same type, and same routing policy such as weighted or latency
主动-主动故障转移,当您希望大部分时间所有资源都可用时。所有记录具有相同的名称、相同的类型和相同的路由策略,例如加权或延迟。 - active-passive failover when you have active primary resources and standby secondary resources. You create two records - primary & secondary with failover routing policy
主动-被动故障转移,当您有活动的主资源和备用次资源时。您创建两条记录 - 主记录和次记录,并使用故障转移路由策略。
AWS Global Accelerator
- Global Service 全局服务
- Global Accelerator improves the performance of your application globally by lowering latency and jitter, and increasing throughput as compared to the public internet.
与公共互联网相比,AWS Global Accelerator 可通过降低延迟和抖动、提高吞吐量来改善应用程序的全球性能。 - Use Edge locations and AWS internal global network to find an optimal pathway to route the traffic.
利用边缘站点和 AWS 内部全球网络,找到路由流量的最佳路径。 - First, you create a global accelerator, which provisions two anycast static IP addresses.
首先,您创建一个全局加速器,它会配置两个 Anycast 静态 IP 地址。 - Then you register one or more endpoint groups with Global Accelerator. Each endpoint group can have one or more AWS resources such as NLB, ALB, EC2 instances or Elastic IPs.
然后,您向 Global Accelerator 注册一个或多个终结点组。每个终结点组可以有一个或多个 AWS 资源,例如 NLB、ALB、EC2 实例或 Elastic IP。 - You can set the weight to choose how much traffic is routed to each endpoint.
您可以设置权重来选择流量路由到每个终结点的比例。 - Within the endpoint, global accelerator monitors health checks of all AWS resources to send traffic to healthy resources only
在终结点内部,Global Accelerator 会监控所有 AWS 资源的运行状况检查,以便仅将流量发送到健康的资源。
Management & Governance 管理与治理
Amazon CloudWatch
- CloudWatch is used to collect & track metrics, collect & monitor log files, and set alarms of AWS resources like EC2, ALB, S3, Lambda, DynamoDB, RDS etc.
CloudWatch 用于收集和跟踪指标、收集和监控日志文件以及为 EC2、ALB、S3、Lambda、DynamoDB、RDS 等 AWS 资源设置警报。 - By default, CloudWatch will aggregate and store the metrics at Standard 1-minute resolution. You can set max high-resolution at 1 second.
默认情况下,CloudWatch 会以标准的 1 分钟分辨率聚合和存储指标。您可以将最高分辨率设置为 1 秒。 - CloudWatch dashboard can include graphs from different AWS accounts and regions
CloudWatch 控制面板可以包含来自不同 AWS 账户和区域的图表 - CloudWatch has the following EC2 instance metrics by default - CPU Utilization %, Network Utilization, and Disk Read/Write. You need to set up custom metrics for Memory Utilization, Disk Space Utilization, Swap Utilization, etc.
CloudWatch 默认具有以下 EC2 实例指标:CPU 利用率%、网络利用率以及磁盘读写。您需要为内存利用率、磁盘空间利用率、交换空间利用率等设置自定义指标。
您需要在 EC2 上安装 CloudWatch Logs Agent 以在 CloudWatch 中收集自定义指标和日志。 - You can terminate or recover EC2 instances based on CloudWatch Alarm
您可以根据 CloudWatch 告警终止或恢复 EC2 实例。 - You can schedule a Cron job using CloudWatch Events
您可以使用 CloudWatch Events 安排 Cron 作业。 - Any AWS service needs permission for the `logs:CreateLogGroup`, `logs:CreateLogStream`, and `logs:PutLogEvents` actions to write logs to CloudWatch
任何 AWS 服务都应具有 `logs:CreateLogGroup`、`logs:CreateLogStream` 和 `logs:PutLogEvents` 操作的权限,以便将日志写入 CloudWatch
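A minimal IAM policy granting those three actions might look like this (the broad `arn:aws:logs:*:*:*` resource is for illustration only; in practice scope it to specific log groups):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```

Attach this policy to the role assumed by the service (e.g. an EC2 instance profile or a Lambda execution role) so it can create log groups/streams and push log events.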
AWS CloudTrail
- CloudTrail provides audit and event history of all the actions taken by any user, AWS service, CLI, or SDK across AWS infrastructure.
CloudTrail 提供 AWS 基础设施中所有用户、AWS 服务、CLI 或 SDK 所采取操作的审计和事件历史记录。 - CloudTrail is enabled (applied) by default for all regions
CloudTrail 默认在所有区域启用(应用) - CloudTrail logs can be sent to CloudWatch logs or S3 bucket
CloudTrail 日志可以发送到 CloudWatch 日志或 S3 存储桶 - Use case: check in the CloudTrail if any resource is deleted from AWS without anyone’s knowledge.
用例:在 CloudTrail 中检查是否有任何资源在无人知晓的情况下从 AWS 中删除。
AWS CloudFormation
- Infrastructure as Code (IaC). Enables modeling, provisioning, and versioning of your entire infrastructure in a text (JSON or YAML) file
基础设施即代码 (IaC)。使用文本(JSON 或 YAML)文件对您的整个基础设施进行建模、配置和版本控制 - Create, update, or delete your stack of resources using a CloudFormation template as a JSON or YAML file
使用 CloudFormation 模板(JSON 或 YAML 文件)创建、更新或删除您的资源堆栈 - CloudFormation template has the following components:-
CloudFormation 模板包含以下组件:- Resources: AWS resources declared in the template (mandatory)
Resources:模板中声明的 AWS 资源(必需) - Parameters: input values to be passed in the template at stack creation time
Parameters:在堆栈创建时传递到模板的输入值 - Mappings: Static variables in the template
Mappings:模板中的静态变量 - Outputs: Output which you want to see once the stack is created e.g. return ElasticIP address after attaching to VPC, return DNS of ELB after stack creation.
输出:堆栈创建后您希望看到的输出,例如,在附加到 VPC 后返回 Elastic IP 地址,在堆栈创建后返回 ELB 的 DNS。 - Conditions: conditions that control whether certain resources are created or properties are assigned
条件(Conditions):控制是否创建某些资源或分配某些属性的条件 - Metadata 元数据
- Template helpers: References and Functions
模板助手:引用和函数
- Allows DependsOn attribute to specify that the creation of a specific resource follows another
允许 DependsOn 属性指定特定资源的创建遵循另一个资源 - Allows DeletionPolicy attribute to be defined for resources in the template
允许为模板中的资源定义 DeletionPolicy 属性- retain to preserve resources like S3 even after stack deletion
retain 用于在堆栈删除后保留 S3 等资源 - snapshot to backup resources like RDS after stack deletion
snapshot 用于在堆栈删除后备份 RDS 等资源
- Supports Bootstrap scripts to install packages, files and services on the EC2 instances by simply describing them in the template
支持使用 Bootstrap 脚本,只需在模板中进行描述,即可在 EC2 实例上安装软件包、文件和服务 - The automatic rollback on error feature is enabled by default: if stack creation fails, all AWS resources that CloudFormation created successfully up to the point of the error are deleted
默认情况下启用出错自动回滚功能:如果堆栈创建失败,CloudFormation 在出错之前成功创建的所有 AWS 资源都将被删除
AWS CloudFormation StackSets 允许您通过单次操作跨多个账户、区域、AWS 组织中的 OU 来创建、更新或删除 CloudFormation 堆栈。 - Using CloudFormation itself is free, underlying AWS resources are charged
使用 CloudFormation 本身是免费的,但底层 AWS 资源会收费 - Use case: Use to set up the same infrastructure in different environments e.g. SIT, UAT and PROD. Use to create DEV resources every day in working hours and delete them later to lower the cost
用例:用于在不同环境(例如 SIT、UAT 和 PROD)中设置相同的基础设施。用于在工作时间内每天创建 DEV 资源,之后删除它们以降低成本。
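A minimal template sketch tying together the components above (Parameters, Resources with `DependsOn` and `DeletionPolicy`, and Outputs; the AMI ID, bucket name, and logical names are placeholders):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch of the template components listed above
Parameters:
  BucketSuffix:               # input passed at stack-creation time
    Type: String
Resources:                    # the only mandatory section
  LogsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain    # keep the bucket even after stack deletion
    Properties:
      BucketName: !Sub "app-logs-${BucketSuffix}"
  AppInstance:
    Type: AWS::EC2::Instance
    DependsOn: LogsBucket     # create the bucket before the instance
    Properties:
      ImageId: ami-12345678   # placeholder AMI ID
      InstanceType: t3.micro
Outputs:
  BucketName:                 # shown once the stack is created
    Value: !Ref LogsBucket
```

Deploying the same template with different parameter values is how one template serves SIT, UAT and PROD environments.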
AWS Elastic Beanstalk
- Platform as a Service (PaaS)
平台即服务 (PaaS) - Makes it easier for developers to quickly deploy and manage applications without thinking about underlying resources
使开发人员能够轻松快速地部署和管理应用程序,而无需考虑底层资源 - Automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling and application health monitoring
自动处理容量配置、负载均衡、自动伸缩和应用程序运行状况监控的部署细节 - You can launch an application with the following pre-configured platforms:-
您可以启动具有以下预配置平台的应用程序:- Apache Tomcat for Java applications,
适用于 Java 应用程序的 Apache Tomcat, - Apache HTTP Server for PHP and Python applications
适用于 PHP 和 Python 应用程序的 Apache HTTP 服务器 - Nginx or Apache HTTP Server for Node.js applications
Nginx 或 Apache HTTP 服务器用于 Node.js 应用程序 - Passenger or Puma for Ruby applications
Passenger 或 Puma 用于 Ruby 应用程序 - Microsoft IIS 7.5 for .NET applications
Microsoft IIS 7.5 用于 .NET 应用程序 - Single and Multi Container Docker
单容器和多容器 Docker
- You can also launch an environment with the following environment tier:-
您还可以使用以下环境层启动环境:- An application that serves HTTP requests runs in a web server environment tier.
在 Web 服务器环境层中运行的应用程序会处理 HTTP 请求。 - A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier.
在工作进程环境层中运行的、从 Amazon Simple Queue Service (Amazon SQS) 队列中提取任务的后端环境。
- Elastic Beanstalk itself is free; you pay only for the resources it provisions, e.g. EC2, ASG, ELB, and RDS
Elastic Beanstalk 本身不收费,您只需为您配置的资源付费,例如 EC2、ASG、ELB 和 RDS 等。 - supports custom AMI to be used
支持使用自定义 AMI - supports multiple running environments for development, staging and production, etc.
支持开发、暂存和生产等多种运行环境 - supports versioning and stores and tracks application versions over time allowing easy rollback to prior version
支持版本控制,并随时间存储和跟踪应用程序版本,便于回滚到先前版本
AWS ParallelCluster
- Deploy and manage High-Performance Computing (HPC) clusters on AWS using a simple text file
使用简单的文本文件在 AWS 上部署和管理高性能计算 (HPC) 集群 - You have full control of the underlying resources.
您可以完全控制底层资源。 - AWS ParallelCluster is free, and you pay only for the AWS resources needed to run your applications.
AWS ParallelCluster 是免费的,您只需为运行应用程序所需的 AWS 资源付费。 - You can configure an HPC cluster with an Elastic Fabric Adapter (EFA) to get OS-bypass capabilities for low-latency network communication
您可以为 HPC 集群配置弹性结构适配器(Elastic Fabric Adapter, EFA),以获得用于低延迟网络通信的操作系统旁路功能
AWS Step Functions (SF)
- Build serverless visual workflow to orchestrate your Lambda functions
构建无服务器可视化工作流来协调您的 Lambda 函数 - You define the state machine in declarative JSON (the Amazon States Language); unlike SWF, there is no decider program to write.
您以声明式 JSON(Amazon States Language)定义状态机;与 SWF 不同,无需编写决策程序。
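A minimal state-machine definition in the Amazon States Language might look like this (the Lambda ARNs, account ID, and state names are hypothetical):

```json
{
  "Comment": "Minimal sketch: two Lambda tasks joined by a choice step",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Next": "IsApproved"
    },
    "IsApproved": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.approved", "BooleanEquals": true, "Next": "Notify" }
      ],
      "Default": "Reject"
    },
    "Notify": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Notify",
      "End": true
    },
    "Reject": { "Type": "Fail", "Error": "OrderRejected" }
  }
}
```

Step Functions renders this JSON as the visual workflow and handles retries, branching, and state passing between the Lambda invocations.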
AWS Simple Workflow Service (SWF)
- Code runs on EC2 (not Serverless)
代码在 EC2 上运行(非无服务器) - Older service. You write a decider program to separate activity steps from decision steps. Use SWF when you need external signals to intervene in the process or need the child process to pass values to the parent process; otherwise, use Step Functions for new applications.
较旧的服务。您需要编写一个决策程序来区分活动步骤和决策步骤。当您需要外部信号介入流程或需要子进程将值传递给父进程时,请使用 SWF;否则,新应用程序请使用 Step Functions。
AWS Organizations
- Global service to manage multiple AWS accounts e.g. accounts per department, per cost center, per environment (dev, test, prod)
全局服务,用于管理多个 AWS 账户,例如按部门、按成本中心、按环境(开发、测试、生产)划分的账户 - Pricing benefits from aggregated usage across accounts.
定价受益于跨账户的聚合使用。 - Consolidate billing across all accounts - single payment method
整合所有账户的账单 - 单一付款方式 - Organization has multiple Organization Units (OUs) (or accounts) based on department, cost center or environment, OU can have other OUs (hierarchy)
组织根据部门、成本中心或环境拥有多个组织单位(OU)(或账户),OU 可以拥有其他 OU(层级结构) - Organization has one master account and multiple member accounts
组织有一个主账户和多个成员账户 - You can apply Service Control Policies (SCPs) at OU or account level, SCP is applied to all users and roles in that account
您可以将服务控制策略 (SCP) 应用于组织单位 (OU) 或账户级别,SCP 会应用于该账户中的所有用户和角色 - An SCP Deny takes precedence over Allow anywhere in an account's OU tree, e.g. allowed at the account level but denied at the OU level = deny
SCP 的拒绝优先于允许,在账户的整个 OU 树中均如此,例如,在账户级别允许但在 OU 级别拒绝则为拒绝。 - The master (management) account can do anything even if you apply an SCP
主账户可以执行任何操作,即使应用了 SCP - To merge Firm_A Organization with Firm_B Organization
合并 Firm_A 组织与 Firm_B 组织- Remove all member accounts from Firm_A organization
从 Firm_A 组织中移除所有成员账户 - Delete the Firm_A organization
删除 Firm_A 组织 - Invite Firm_A master account to join Firm_B organization as a member account
邀请 Firm_A 主账户作为成员账户加入 Firm_B 组织
- AWS Resource Access Manager (RAM) helps you to create your AWS resources once, and securely share across accounts within OUs in AWS Organization. You can share Transit Gateways, Subnets, AWS License Manager configurations, Route 53 resolver rules, etc.
AWS Resource Access Manager (RAM) 可帮助您一次性创建 AWS 资源,并在 AWS Organization 中的 OU 内跨账户安全地共享。您可以共享 Transit Gateway、子网、AWS License Manager 配置、Route 53 解析器规则等。 - One account can share resources with another individual account within AWS organization with the help of RAM. You must enable resource sharing at AWS Organization level.
借助 RAM,一个账户可以与 AWS Organizations 内的另一个独立账户共享资源。您必须在 AWS Organizations 级别启用资源共享。 - AWS Control Tower, integrated with AWS Organizations, helps you quickly set up and configure new AWS accounts from a best-practice baseline called a landing zone
AWS Control Tower 与 AWS Organizations 集成,可帮助您基于称为着陆区(landing zone)的最佳实践基线快速设置和配置新的 AWS 账户。
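As a sketch, an SCP attached at the root or at an OU might look like this (a commonly cited example that prevents member accounts from leaving the organization; the `Sid` is arbitrary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

Because an SCP Deny wins over any Allow in the account's OU tree, no IAM policy inside the member account can override this restriction.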
AWS OpsWorks
- Provide managed instances of Chef and Puppet configuration management services, which help to configure and operate applications in AWS.
提供 Chef 和 Puppet 配置管理服务的托管实例,这些服务有助于在 AWS 中配置和运行应用程序。 - Configuration as Code - OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across EC2 instances using Code.
代码即配置 - OpsWorks 允许您使用 Chef 和 Puppet 通过代码自动执行 EC2 实例的服务器配置、部署和管理。 - OpsWorks Stacks lets you model your application as a stack containing different layers, such as load balancing, database, and application server.
OpsWorks Stacks 允许您将应用程序建模为包含不同层(如负载均衡、数据库和应用程序服务器)的堆栈。
AWS Glue
- Serverless, fully managed ETL (extract, transform, and load) service
无服务器、完全托管的 ETL(提取、转换、加载)服务 - An AWS Glue Crawler scans data from a data source such as S3 or a DynamoDB table, determines the schema, and then creates metadata tables in the AWS Glue Data Catalog.
AWS Glue Crawler 会扫描 S3 或 DynamoDB 表等数据源中的数据,确定数据的架构,然后创建 AWS Glue 数据目录中的元数据表。 - AWS Glue provides classifiers for CSV, JSON, AVRO, XML or database to determine the schema for data
AWS Glue 提供 CSV、JSON、AVRO、XML 或数据库的分类器来确定数据的架构
Containers 容器
- ECR (Elastic Container Registry) is Docker Hub to pull and push Docker images, managed by Amazon.
ECR (Elastic Container Registry) 是一个由亚马逊管理的、用于拉取和推送 Docker 镜像的 Docker Hub。 - ECS (Elastic Container Service) is a container management service to run, stop, and manage Docker containers on a cluster
ECS (Elastic Container Service) 是一个容器管理服务,用于在集群上运行、停止和管理 Docker 容器。 - ECS Task Definition where you configure task and container definition
ECS 任务定义 (ECS Task Definition) 是您配置任务和容器定义的地方。- Specify ECS Task IAM Role for ECS task (Docker container instance) to access AWS services like S3 bucket or DynamoDB
指定 ECS 任务 IAM 角色 (ECS Task IAM Role),以便 ECS 任务(Docker 容器实例)访问 S3 存储桶或 DynamoDB 等 AWS 服务。 - Specify the Task Execution IAM Role, i.e. `ecsTaskExecutionRole`, for EC2 (ECS Agent) to pull Docker images from ECR, make API calls to the ECS service, and publish container logs to Amazon CloudWatch on your behalf
为 EC2(ECS Agent)指定任务执行 IAM 角色,以便代表您从 ECR 拉取 Docker 镜像、向 ECS 服务发出 API 调用以及将容器日志发布到 Amazon CloudWatch。 - Add container by specifying docker image, memory, port mappings, health-check, etc.
通过指定 Docker 镜像、内存、端口映射、运行状况检查等来添加容器。
- You can create multiple ECS Task Definitions - e.g. one task definition to run a web application on the Nginx server and another task definition to run a microservice on Tomcat.
您可以创建多个 ECS 任务定义,例如,一个任务定义用于在 Nginx 服务器上运行 Web 应用程序,另一个任务定义用于在 Tomcat 上运行微服务。 - ECS Service Definition where you configure the cluster, ELB, ASG, task definition, and number of tasks, to run multiple similar ECS tasks; each task deploys a Docker container on an EC2 instance. One EC2 instance can run multiple ECS tasks.
ECS 服务定义,您可以在其中配置集群、ELB、ASG、任务定义以及要运行的 ECS 任务数量,从而在 EC2 实例上部署多个相似的 ECS 任务。一个 EC2 实例可以运行多个 ECS 任务。 - Amazon EC2 Launch Type: You manage EC2 instances of ECS Cluster. You must install ECS Agent on each EC2 instance. Cheaper. Good for predictable, long-running tasks.
Amazon EC2 启动类型:您管理 ECS 集群的 EC2 实例。您必须在每个 EC2 实例上安装 ECS Agent。成本较低。适用于可预测的、长时间运行的任务。 - ECS Agent The agent sends information about the EC2 instance’s current running tasks and resource utilization to Amazon ECS. It starts and stops tasks whenever it receives a request from Amazon ECS
ECS Agent 该代理将有关 EC2 实例当前运行的任务和资源利用率的信息发送到 Amazon ECS。它在收到来自 Amazon ECS 的请求时启动和停止任务。 - Fargate Launch Type: Serverless, EC2 instances are managed by Fargate. You only manage and pay for container resources. Costlier. Good for variable, short-running tasks
Fargate 启动类型:无服务器,EC2 实例由 Fargate 管理。您只需管理和支付容器资源的费用。成本较高。适用于可变、短时间运行的任务。 - EKS (Elastic Kubernetes Service) is managed Kubernetes clusters on AWS
EKS (Elastic Kubernetes Service) 是 AWS 上的托管 Kubernetes 集群。
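A minimal ECS task definition sketch showing the task role, execution role, and one container definition (the account ID, role names, and ECR image URI are placeholders):

```json
{
  "family": "web-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/webAppTaskRole",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  ]
}
```

Note the split of roles: `taskRoleArn` is what the running container uses to call AWS services (e.g. S3, DynamoDB), while `executionRoleArn` is what the ECS agent uses to pull the image from ECR and ship logs to CloudWatch. `hostPort: 0` requests dynamic host-port mapping on the EC2 launch type.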
Cheat Sheet 备忘单
| AWS Service AWS 服务 | Keywords 关键词 |
|---|---|
| Security 安全 | |
| Amazon CloudWatch | Metrics, Logs, Alarms 指标、日志、告警 |
| AWS CloudTrail | Audit Events 审计事件 |
| AWS WAF | Firewall, SQL injection, Cross-site scripting (XSS), Layer 7 attacks 防火墙、SQL 注入、跨站脚本(XSS)、第 7 层攻击 |
| AWS Shield | DDoS attack, Layer 3 & 4 attacks DDoS 攻击,3 层和 4 层攻击 |
| Amazon Macie | Sensitive Data, Personally Identifiable Information (PII) 敏感数据,个人身份信息 (PII) |
| Amazon Inspector | EC2 Security Assessment, Unintended Network Accessibility EC2 安全评估,意外的网络可访问性 |
| Amazon GuardDuty | Analyze VPC Flow Logs, Threat Detection 分析 VPC 流日志,威胁检测 |
| AWS VPN | Online Network Connection, Long-term Continuous transfer, Low to Moderate Bandwidth 在线网络连接、长期持续传输、低到中等带宽 |
| AWS Direct Connect | Private Secure Dedicated Connection, Long-term Continuous transfer, High Bandwidth 私有安全专用连接、长期持续传输、高带宽 |
| Application Integration 应用程序集成 | |
| Amazon SNS | Serverless, PubSub, Fan-out 无服务器、发布/订阅、扇出 |
| Amazon SQS | Serverless, Decoupled, Queue, Fan-out 无服务器、解耦、队列、扇出 |
| Amazon MQ | ActiveMQ |
| Amazon SWF | Serverless, Simple Workflow Service, Decoupled, Task Coordinator, Distributed & Background Jobs 无服务器、简单工作流服务、解耦、任务协调器、分布式和后台作业 |
| AWS Step Functions (SF) | Orchestrate / Coordinate Lambda functions and ECS containers into a workflow 编排/协调 Lambda 函数和 ECS 容器以构成工作流 |
| AWS OpsWork AWS OpsWorks | Chef & Puppet Chef 和 Puppet |
| Storage 存储 | |
| EBS | Block Storage Volume for EC2 EC2 的块存储卷 |
| EFS | Network File System for EC2, Concurrent access EC2 的网络文件系统,并发访问 |
| Amazon S3 | Serverless, Object Storage - Photos & Videos, Website Hosting 无服务器,对象存储 - 照片和视频,网站托管 |
| Amazon Athena | Query data in S3 using SQL 使用 SQL 查询 S3 中的数据 |
| AWS Snow Family AWS Snow 系列 | Offline Data Migration, Petabyte to exabyte Scale 离线数据迁移,PB 到 EB 级 |
| AWS DataSync | Online Data Transfer, Immediate One-time transfer 在线数据传输,即时一次性传输 |
| AWS Storage Gateway | Hybrid Storage b/w On-premise and AWS 本地和 AWS 之间的混合存储 |
| Compute 计算 | |
| AWS Lambda | Serverless, FaaS 无服务器, FaaS |
| Database 数据库 | |
| Amazon RDS | Relational Database - PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server 关系型数据库 - PostgreSQL、MySQL、MariaDB、Oracle 和 SQL Server |
| Amazon Aurora | Relational Database - Amazon-Owned 关系型数据库 - 亚马逊自有 |
| Amazon DynamoDB | Serverless, key-value NoSQL Database - Amazon-Owned 无服务器、键值对 NoSQL 数据库 - 亚马逊自有 |
| Amazon DocumentDB | Document Database, JSON documents - MongoDB 文档数据库,JSON 文档 - MongoDB |
| Amazon Neptune | Graph Database, Social Media Relationship 图数据库,社交媒体关系 |
| Amazon Timestream | Time Series Database 时间序列数据库 |
| Amazon Redshift | Columnar Database, Analytics, BI, Parallel Query 列式数据库、分析、商业智能 (BI)、并行查询 |
| Amazon Elasticache | Redis and Memcached, In-memory Cache Redis 和 Memcached、内存缓存 |
| Amazon EMR | Elastic MapReduce, Big Data - Apache Hadoop, Spark, Hive, Hbase, Flink, Hudi Elastic MapReduce,大数据 - Apache Hadoop、Spark、Hive、Hbase、Flink、Hudi |
| Amazon Elasticsearch Service | Elasticsearch, ELK Elasticsearch,ELK |
| Microservices 微服务 | |
| Elastic Container Registry (ECR) 弹性容器注册表 (ECR) | Docker image repository, DockerHub Docker 镜像仓库,DockerHub |
| Elastic Container Service (ECS) 弹性容器服务 (ECS) | Docker container management system Docker 容器管理系统 |
| AWS Fargate | Serverless ECS 无服务器 ECS |
| AWS X-Ray | Trace Request, Debug 跟踪请求、调试 |
| Developer 开发者 | |
| AWS CodeCommit | like GitHub, Git-based Source Code Repository 类似于 GitHub,基于 Git 的源代码存储库 |
| AWS CodeBuild | like Jenkins CI, Code Compile, Build & Test 类似 Jenkins CI、Code Compile、Build & Test |
| AWS CodeDeploy | Code deployment to EC2, Fargate, and Lambda 将代码部署到 EC2、Fargate 和 Lambda |
| AWS CodePipeline | CICD pipelines, Rapid Software or Build Release CI/CD 流水线、快速软件或构建发布 |
| AWS CloudShell | CLI, Browser-based Shell CLI、基于浏览器的 Shell |
| AWS Elastic Beanstalk | PaaS, Quick deploy applications - Java-Tomcat, PHP/Python-Apache HTTP Server, Node.js-Nginx PaaS,快速部署应用程序 - Java-Tomcat,PHP/Python-Apache HTTP Server,Node.js-Nginx |
| Amazon Workspaces | Desktop-as-a-Service, Virtual Windows or Linux Desktops 桌面即服务 (Desktop-as-a-Service),虚拟 Windows 或 Linux 桌面 |
| Amazon AppStream 2.0 | Install Applications on Virtual Desktop and access it from Mobile, Tab or Remote Desktop through Browser 在虚拟桌面上安装应用程序,并通过浏览器从移动设备、平板电脑或远程桌面访问 |
| AWS CloudFormation | Infrastructure as Code, Replicate Infrastructure 基础设施即代码,复制基础设施 |
| AWS Certificate Manager (ACM) | Create, renew, deploy SSL/TLS certificates to CloudFront and ELB 创建、续订 SSL/TLS 证书并将其部署到 CloudFront 和 ELB |
| AWS Migration Hub | Centralized Tracking on the progress of all migrations across AWS AWS 迁移进度的集中跟踪 |
| AWS Glue | Data ETL (extract, transform, load), Crawler, Data Catalogue 数据 ETL(提取、转换、加载)、爬虫、数据目录 |
| AWS AppSync | GraphQL |
| Amazon Elastic Transcoder | Media (Audio, Video) converter 媒体(音频、视频)转换器 |
Important Ports 重要端口
| Protocol/Database 协议/数据库 | Port 端口 |
|---|---|
| FTP | 21 |
| SSH | 22 |
| SFTP | 22 |
| HTTP | 80 |
| HTTPS | 443 |
| RDP | 3389 |
| NFS | 2049 |
| PostgreSQL | 5432 |
| MySQL | 3306 |
| MariaDB | 3306 |
| Aurora | 3306 or 5432 3306 或 5432 |
| Oracle RDS | 1521 |
| MSSQL Server | 1433 |
White Papers 白皮书
Disaster Recovery 灾难恢复
- RPO - Recovery Point Objective - How much data you can afford to lose in a disaster, e.g. the last 20 min of data written before the disaster is lost
恢复点目标 (RPO) - 从灾难中恢复时会丢失多少数据,例如灾难发生前最后 20 分钟的数据丢失 - RTO - Recovery Time Objective - How much downtime is required to recover from a disaster, e.g. 1 hour of downtime to start the disaster recovery service
恢复时间目标 (RTO) - 从灾难中恢复需要多少停机时间,例如启动灾难恢复服务需要 1 小时的停机时间 - Disaster Recovery techniques (RPO & RTO decrease and the cost goes up as we go down the list)
灾难恢复技术 (RPO 和 RTO 越低,成本越高)- Backup & Restore – Data is backed up and restored, with nothing running
备份与恢复 – 备份和恢复数据,期间不运行任何服务 - Pilot light – Only minimal critical service like RDS is running and the rest of the services can be recreated and scaled during recovery
试运行灯(Pilot light)– 仅运行像 RDS 这样的最小关键服务,其余服务可以在恢复期间重新创建和扩展 - Warm Standby – Fully functional site with minimal configuration is available and can be scaled during recovery
温备(Warm Standby)– 提供功能齐全的站点,配置最少,可在恢复期间进行扩展 - Multi-Site – Fully functional site with identical configuration is available and processes the load
多站点(Multi-Site)– 提供功能齐全的站点,配置相同,并处理负载
- Use Amazon Aurora Global Database for RDS and DynamoDB Global Table for NoSQL databases for disaster recovery with stringent RPO of 1 second and RTO of 1 minute.
对于灾难恢复,请使用 Amazon Aurora Global Database(用于 RDS)和 DynamoDB Global Table(用于 NoSQL 数据库),以实现严格的 1 秒 RPO 和 1 分钟 RTO。
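The trade-off above (lower RPO/RTO at higher cost) can be illustrated with a small helper that picks the cheapest technique meeting a target. The RPO/RTO figures below are hypothetical ballpark values for illustration, not AWS guarantees:

```python
# Illustrative mapping from the four DR techniques above to rough RPO/RTO
# figures (minutes). The list is ordered cheapest-first, so the first match
# is the cheapest technique that satisfies the requirement.

TECHNIQUES = [  # (name, rpo_minutes, rto_minutes) -- hypothetical values
    ("Backup & Restore", 24 * 60, 24 * 60),
    ("Pilot Light",      60,      4 * 60),
    ("Warm Standby",     5,       30),
    ("Multi-Site",       0,       1),
]

def cheapest(rpo_target_min, rto_target_min):
    """Return the cheapest technique whose RPO and RTO meet the targets."""
    for name, rpo, rto in TECHNIQUES:
        if rpo <= rpo_target_min and rto <= rto_target_min:
            return name
    return None  # no technique meets the requirement

print(cheapest(60, 240))   # → Pilot Light
print(cheapest(1, 5))      # → Multi-Site
```

Read the output as: "if you can tolerate 1 hour of data loss and 4 hours of downtime, Pilot Light is sufficient; near-zero targets force Multi-Site."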
5 Pillars of the AWS Well-Architected Framework
AWS Well-Architected Framework 的 5 大支柱
The 5 Pillars of AWS Well-Architected Framework are as follows:-
AWS Well-Architected Framework 的 5 大支柱如下:
- Operational Excellence 卓越运营
- Use AWS Trusted Advisor to get recommendations on AWS best practices, optimize AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas
使用 AWS Trusted Advisor 获取有关 AWS 最佳实践的建议,优化 AWS 基础架构,提高安全性和性能,降低成本并监控服务配额 - Use Serverless application API Gateway (Front layer for auth, cache, routing), Lambda (Compute), DynamoDB (Database), DAX (Caching), S3 (File Storage) and Cognito User Pools (Auth), CloudFront (Deliver content globally), SES (Send email), SQS & SNS (Publish & Notify events)
使用无服务器应用程序 API Gateway(用于身份验证、缓存、路由的前端)、Lambda(计算)、DynamoDB(数据库)、DAX(缓存)、S3(文件存储)和 Cognito 用户池(身份验证)、CloudFront(全局交付内容)、SES(发送电子邮件)、SQS 和 SNS(发布和通知事件)
- Security 安全性
- Use AWS Shield and AWS WAF to prevent network, transport and application layer security attacks
使用 AWS Shield 和 AWS WAF 来防止网络、传输和应用程序层安全攻击
- Reliability 可靠性
- Performance Efficiency 性能效率
- Cost Optimization 成本优化
- Use AWS Cost Explorer to forecast daily or monthly cloud costs based on ML applied to your historical cost
使用 AWS Cost Explorer,基于应用于您历史成本的机器学习来预测每日或每月的云成本。 - Use AWS Budget to set yearly, quarterly, monthly, daily or fixed cost or usage budget for AWS services and get notified when actual or forecast cost or usage exceeds budget limit.
使用 AWS Budgets 来为 AWS 服务设置年度、季度、月度、每日或固定成本或使用量预算,并在实际成本或使用量或预测成本或使用量超出预算限制时收到通知。 - Use AWS Saving Plans to get a discount in exchange for usage commitment e.g. $10/hour for one-year or three-year period. AWS offers three types of Savings Plans – 1. Compute Savings Plans apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate. 2. EC2 Instance Savings Plans apply to EC2 usage, and 3. SageMaker Savings Plans apply to SageMaker usage.
使用 AWS Savings Plans,您可以通过承诺使用量来获得折扣,例如在一年或三年内承诺每小时使用 10 美元。AWS 提供三种类型的 Savings Plans – 1. Compute Savings Plans 适用于 Amazon EC2、AWS Lambda 和 AWS Fargate 的使用。2. EC2 Instance Savings Plans 适用于 EC2 的使用,3. SageMaker Savings Plans 适用于 SageMaker 的使用。 - Use VPC Gateway endpoint to access S3 and DynamoDB privately within AWS network to reduce data transfer cost
使用 VPC Gateway Endpoint 可在 AWS 网络内私有访问 S3 和 DynamoDB,以降低数据传输成本。 - Use AWS Organization for consolidated billing and aggregated usage benefits across AWS accounts
使用 AWS Organizations 实现跨 AWS 账户的统一账单和聚合使用量优惠。
Disclaimer 免责声明
I have created the exam notes after watching many training videos and solving tons of practice exam questions. I found that some information given in training videos and practice exams were not correct (or should say not updated). Amazon AWS is growing very fast, they keep enhancing their services with loads of new features as well as introducing new AWS services.
在观看了许多培训视频并解决了大量的模拟试题后,我创建了这份考试笔记。我发现一些培训视频和模拟试题中的信息是不正确的(或者说没有及时更新)。Amazon AWS 发展非常迅速,他们不断通过大量新功能来增强其服务,并推出新的 AWS 服务。
I have personally verified each and every statement in this exam notes from AWS services documentation and FAQs at the time of writing these notes. Please comment and share if you find any statement has become stale or irrelevant after updates in AWS services. Let’s make this exam notes helpful and trustful for all AWS aspirants!
在撰写本笔记时,我已通过 AWS 服务文档和常见问题解答亲自核实了本笔记中的每一项陈述。如果您发现任何陈述在 AWS 服务更新后已过时或不相关,请发表评论并分享。让我们一起努力,让这份考试笔记对所有 AWS 备考者都有帮助且值得信赖!