
:cloud: AWS CheatSheets In A4

  • CheatSheet: Amazon AWS :Cloud:
    :PROPERTIES:
    :type: aws, cloud
    :export_file_name: cheatsheet-aws-A4.pdf
    :END:


  • PDF Link: [[https://github.com/dennyzhang/cheatsheet-aws-A4/blob/master/cheatsheet-aws-A4.pdf][cheatsheet-aws-A4.pdf]], Category: [[https://cheatsheet.dennyzhang.com/category/cloud/][Cloud]]
  • Blog URL: https://cheatsheet.dennyzhang.com/cheatsheet-aws-A4
  • Related posts: [[https://cheatsheet.dennyzhang.com/cheatsheet-gcp-A4][GCP CheatSheet]], [[https://cheatsheet.dennyzhang.com/cheatsheet-virtualization-A4][CheatSheet: Cloud Virtualization]], [[https://github.com/topics/denny-cheatsheets][#denny-cheatsheets]]

File me [[https://github.com/dennyzhang/cheatsheet-aws-A4/issues][Issues]] or star [[https://github.com/dennyzhang/cheatsheet-aws-A4][this repo]].

** AWS CLI Basic
| Name | Summary |
|------+---------|
| List all used resources in all regions | [[https://github.com/dennyzhang/cheatsheet-aws-A4/blob/master/INSTRUCT.md#list-all-resources][Github: List all resources]] |
| Install aws cli | =pip install awscli=, =aws help= |
| Load aws cli profile | =aws configure= |
| List regions | =aws ec2 describe-regions= |
| List instances | =aws ec2 describe-instances= |
| AWS CLI config files | =~/.aws/credentials=, =~/.aws/config= |
| Reference | [[https://github.com/awslabs/aws-shell][Github: awslabs/aws-shell]] |
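A minimal smoke test of the commands in the table above, as a hedged sketch; the =--query= expression is just one way to trim the output:

#+BEGIN_EXAMPLE
# Configure a profile, then confirm the CLI works end to end
aws configure                      # writes ~/.aws/credentials and ~/.aws/config
aws ec2 describe-regions --output table

# Show only instance IDs and states instead of the full JSON
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
    --output table
#+END_EXAMPLE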
** AWS EC2
| Name | Summary |
|------+---------|
| List images owned by amazon | =aws ec2 describe-images=, =aws ec2 describe-images --owners self amazon= |
| [[http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html][Run a new instance]] | =aws ec2 run-instances --image-id ami-c3b8d6aa --count 1 --key-name mykey= |

** AWS DNS
| Name | Summary |
|------+---------|
| List hosted zones | =aws route53 list-hosted-zones= |
| List hosted zone by name | =aws route53 list-hosted-zones-by-name --dns-name my.com= |
| List DNS records by hosted zone | =aws route53 list-resource-record-sets --hosted-zone-id "/hostedzone/XXX"= |

** AWS ECS
| Name | Summary |
|------+---------|
| [[https://linuxacademy.com/community/posts/show/topic/27703-delete-route53-zone-created-by-ecs-service-discovery][Delete Route53 zone From ECS]] | Use aws cli |

** AWS Products - Fundamental
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/ec2/][AWS EC2]] | Virtual servers |
| [[https://aws.amazon.com/eks/][AWS EKS]] | Managed Kubernetes service in AWS |
| [[https://aws.amazon.com/ecs/][AWS ECS]] | Docker container service, orchestrated by AWS itself |
| [[https://aws.amazon.com/ebs/][AWS EBS]] | Block storage |
| [[https://aws.amazon.com/s3/][AWS S3]] | Object storage |
| [[https://aws.amazon.com/rds/][AWS RDS]] | Relational database, e.g., MySQL |
| [[https://aws.amazon.com/vpc/][AWS VPC]] | Virtual private cloud: provides networking isolation |
| [[https://aws.amazon.com/elasticloadbalancing/][AWS ELB]] | Load balancer |
| [[https://aws.amazon.com/cloudwatch/][AWS CloudWatch]] | Monitoring |
| [[https://aws.amazon.com/cloudformation/][AWS CloudFormation]] | Create AWS infrastructure programmatically |
| [[https://aws.amazon.com/lambda/][AWS Lambda]] | Function as a service |
| [[https://aws.amazon.com/machine-learning/][AWS Machine Learning]] | Build smart applications quickly and easily |
| [[https://aws.amazon.com/outposts/][AWS Outposts]] | Run AWS infrastructure on-premises, fully managed and supported by AWS |
| Reference | [[http://docs.aws.amazon.com/general/latest/gr/rande.html][Link: check AWS availability]], [[http://aws.amazon.com/products/][Link: AWS products]] |

** AWS Products - Big Data
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/kinesis/][AWS Kinesis]] | Real-time processing of streaming big data |
| [[https://aws.amazon.com/redshift/][AWS Redshift]] | PB-scale data warehouse |
| [[https://aws.amazon.com/dynamodb/][AWS DynamoDB]] | NoSQL DB service from AWS |
| [[https://aws.amazon.com/emr/][AWS EMR]] | Managed Hadoop framework |
| [[https://aws.amazon.com/cloudsearch/][AWS CloudSearch]] | Managed search service, e.g., Elasticsearch |

** AWS Products - Orchestration
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/step-functions/][AWS Step Functions]] | Orchestration for serverless workflows |
| [[https://aws.amazon.com/sqs/][AWS SQS]] | Queue service |
| [[https://aws.amazon.com/sns/][AWS SNS]] | Notification service |
| [[http://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html][AWS OpsWorks]] | Configuration management service, e.g., Chef, Puppet |
| [[https://aws.amazon.com/elasticbeanstalk/][AWS Beanstalk]] | Easily deploy and manage applications in the cloud |
| [[https://aws.amazon.com/codedeploy/][AWS CodeDeploy]] | Automated deployments |
| [[https://aws.amazon.com/swf/][AWS SWF]] | Workflow service for coordinating application components |
| [[https://aws.amazon.com/datapipeline/][AWS Data Pipeline]] | Orchestration for data-driven workflows |

** AWS Products - Enterprise
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/workspaces/][AWS WorkSpaces]] | Desktops in the cloud |
| [[https://aws.amazon.com/workdocs/][AWS WorkDocs]] | Secure enterprise storage and sharing service, e.g., Office 365, Google Docs |
| [[https://aws.amazon.com/workmail/][AWS WorkMail]] | Secure email and calendaring service, e.g., Gmail and Google Calendar |

** AWS Products - Mobile
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/cognito/][AWS Cognito]] | User identity and app data synchronization |
| [[https://aws.amazon.com/mobileanalytics/][AWS Mobile Analytics]] | Mobile usage data analysis |
| [[https://aws.amazon.com/appstream2/][AWS AppStream]] | Low-latency application streaming, e.g., video watching |

** AWS Products - More
| Name | Summary |
|------+---------|
| [[https://aws.amazon.com/elasticache/][AWS ElastiCache]] | Caching service, e.g., Memcached, Redis |
| [[https://aws.amazon.com/route53/][AWS Route53]] | DNS |
| [[https://aws.amazon.com/cloudfront/][AWS CloudFront]] | CDN |
| [[https://aws.amazon.com/cloudhsm/][AWS CloudHSM]] | Hardware security module |
| [[https://aws.amazon.com/cloudtrail/][AWS CloudTrail]] | User activity and change tracking |
| [[https://aws.amazon.com/ses][AWS SES]] | Send emails |
| [[https://aws.amazon.com/glacier/][AWS Glacier]] | Archive storage, backed by tape |
| [[https://aws.amazon.com/codecommit/][AWS CodeCommit]] | Host Git repos |
| [[https://aws.amazon.com/ec2/vm-import/][AWS Import/Export]] | Import your on-prem VMs to the AWS public cloud |
| [[https://aws.amazon.com/devpay/][AWS DevPay]] | Online billing service |
| [[https://aws.amazon.com/autoscaling/][AWS AutoScaling]] | Monitor your applications, then scale out or scale in |
| [[https://aws.amazon.com/lightsail/][AWS Lightsail]] | VPS; a simpler, bare-bones alternative to EC2 |
| [[https://aws.amazon.com/directconnect/][AWS Direct Connect]] | Dedicated network connection to AWS |
| [[https://aws.amazon.com/lex/][AWS Lex]] | Conversational interfaces for your applications |

** AWS VPC
| Name | Summary |
|------+---------|
| IGW (Internet Gateway) | Enables your instances to connect to the Internet |
| VPG (Virtual Private Gateway) | The Amazon VPC side of a VPN connection |
| [[https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html][NAT Gateway]] | Enables instances in a private subnet to connect to the Internet or other AWS services |
| Customer Gateway | Your side of a VPN connection |
| NAT | Maps multiple private IP addresses to a single public IP address |
| NAT Instance | An EC2 instance that provides port address translation so non-EIP instances can reach the Internet via the IGW |
| Router | Interconnects subnets and directs traffic between the IGW, VPG, NAT instances, and subnets |
| Subnet | A segment of a VPC's IP address range where you can place groups of isolated resources |
| VPC Peering | A networking connection between two VPCs that enables traffic over private IPs |
| ClassicLink | Allows you to link an EC2-Classic instance to a VPC in your account, within the same region |

** More Resources
License: Code is licensed under [[https://www.dennyzhang.com/wp-content/mit_license.txt][MIT License]].

http://docs.aws.amazon.com/cli/latest/index.html

https://www.expeditedssl.com/aws-in-plain-english


  • org-mode configuration :noexport:
    #+STARTUP: overview customtime noalign logdone showall
    #+DESCRIPTION:
    #+KEYWORDS:
    #+LATEX_HEADER: \usepackage[margin=0.6in]{geometry}
    #+LaTeX_CLASS_OPTIONS: [8pt]
    #+LATEX_HEADER: \usepackage[english]{babel}
    #+LATEX_HEADER: \usepackage{lastpage}
    #+LATEX_HEADER: \usepackage{fancyhdr}
    #+LATEX_HEADER: \pagestyle{fancy}
    #+LATEX_HEADER: \fancyhf{}
    #+LATEX_HEADER: \rhead{Updated: \today}
    #+LATEX_HEADER: \rfoot{\thepage\ of \pageref{LastPage}}
    #+LATEX_HEADER: \lfoot{\href{https://github.com/dennyzhang/cheatsheet-aws-A4}{GitHub: https://github.com/dennyzhang/cheatsheet-aws-A4}}
    #+LATEX_HEADER: \lhead{\href{https://cheatsheet.dennyzhang.com/cheatsheet-aws-A4}{Blog URL: https://cheatsheet.dennyzhang.com/cheatsheet-aws-A4}}
    #+AUTHOR: Denny Zhang
    #+EMAIL: [email protected]
    #+TAGS: noexport(n)
    #+PRIORITIES: A D C
    #+OPTIONS: H:3 num:t toc:nil \n:nil @:t ::t |:t ^:t -:t f:t *:t <:t
    #+OPTIONS: TeX:t LaTeX:nil skip:nil d:nil todo:t pri:nil tags:not-in-toc
    #+EXPORT_EXCLUDE_TAGS: exclude noexport
    #+SEQ_TODO: TODO HALF ASSIGN | DONE BYPASS DELEGATE CANCELED DEFERRED
    #+LINK_UP:
    #+LINK_HOME:
  • [#A] Amazon IAM & Security :noexport:IMPORTANT: AWS shared responsibility model

With IAM, you can centrally manage users, security credentials such as passwords, access keys, and permissions policies that control which AWS services and resources users can access.

** pdf: AWS Security Best Practices
http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

** AWS IAM Use cases
http://aws.amazon.com/iam/
| Num | Use scenario |
|-----+--------------|
| 1 | Fine-grained access control to AWS resources |
| 2 | Manage access control for mobile applications with Web Identity Providers |
| 3 | Integrate with your corporate directory |
| 4 | Multi-Factor Authentication for highly privileged users |

** [#A] Types of AWS Credentials
http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
| Credential Type | Use Scenario | Description |
|-----------------+--------------+-------------|
| Passwords | Used to log in to the AWS Management Console | A string of characters |
| MFA | Used to log in to the AWS Management Console | A six-digit single-use code used with your password |
| Access Keys | Digitally signed requests to AWS APIs | Includes an access key ID and a secret access key |
| Key Pairs | SSH login to EC2 instances; CloudFront signed URLs | 1024-bit SSH-2 RSA keys |
| X.509 Certificates | Digitally signed SOAP requests to AWS APIs; HTTPS certificates | Only used to sign SOAP-based requests. The certificate file contains your public key in a base64-encoded DER certificate body |
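A hedged sketch of provisioning the first three credential types from the table above via the CLI; the user name =bob= and the password are placeholders:

#+BEGIN_EXAMPLE
# Create an IAM user, a console password, and an access key pair
aws iam create-user --user-name bob
aws iam create-login-profile --user-name bob --password 'S0me-Strong-Passw0rd'
aws iam create-access-key --user-name bob   # AccessKeyId + SecretAccessKey are shown only once
#+END_EXAMPLE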

*** TODO Difference between "Access Keys" and "Key Pairs"
** How to Extend AWS IAM

  • Federation: Access AWS with your existing corporate identity
    http://www.slideshare.net/AmazonWebServices/delegating-access-to-your-aws-environment-sec303-aws-reinvent-2013
    http://aws.amazon.com/iam/details/manage-federation/

  • Become IAM partners to offer SSO capabilities http://aws.amazon.com/iam/partners/

  • Customize PutPolicy action https://developers.coinbase.com/blog/2015/03/30/self-service-iam Self-Service Cloud Security with Amazon IAM - Coinbase Developers Blog

    http://awsadvent.tumblr.com/post/104927334172/aws-advent-2014-integrating-aws-with-active aws advent - AWS Advent 2014 - Integrating AWS with Active Directory

** Key Concepts of AWS IAM
http://aws.amazon.com/iam/
| Concepts | Summary |
|----------+---------|
| Users | Create individual users. |
| Groups | Manage permissions with groups. |
| Permissions | Grant least privilege. |
| Auditing | Turn on AWS CloudTrail. |
| Password | Configure a strong password policy. |
| MFA | Enable MFA for privileged users. |
| Roles | Use IAM roles for EC2 instances. |
| Sharing | Use IAM roles to share access. |
| Rotate | Rotate security credentials regularly. |
| Conditions | Restrict privileged access further with conditions. |
| Root | Reduce/remove use of root. |
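Several of the concepts above (groups, least privilege) map directly to CLI calls; a sketch in which =developers= and =bob= are placeholder names and the policy is an AWS managed policy:

#+BEGIN_EXAMPLE
# Manage permissions with groups, granting least privilege
aws iam create-group --group-name developers
aws iam attach-group-policy --group-name developers \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
aws iam add-user-to-group --group-name developers --user-name bob
#+END_EXAMPLE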

** When should I use an IAM user, IAM group or IAM role?
http://aws.amazon.com/iam/faqs/

  • An IAM user has permanent long-term credentials and is used to directly interact with AWS services.

  • An IAM group is primarily a management convenience to manage the same set of permissions for a set of IAM users.

  • An IAM role is an AWS Identity and Access Management (IAM) entity with permissions to make AWS service requests.

    IAM roles cannot make direct requests to AWS services; they are meant to be "assumed" by authorized entities, such as IAM users, applications, or AWS services like EC2. IAM roles are used to delegate access within or between AWS accounts.
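A sketch of assuming a role from the CLI; the account ID, role name, and session name are placeholders. The call returns temporary credentials that replace the caller's own:

#+BEGIN_EXAMPLE
# Assume a role; the JSON response carries AccessKeyId,
# SecretAccessKey, and SessionToken
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/demo-role \
    --role-session-name demo-session
#+END_EXAMPLE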

** [#A] Two different permission models: User-based and Resource-based
http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_permissions.html

  • User-based permissions are attached to an IAM user, group, or role and let you specify what that user, group, or role can do. For example, you can assign permissions to the IAM user named Bob, stating that he has permission to use the Amazon Elastic Compute Cloud (Amazon EC2) RunInstances action and that he has permission to get items from an Amazon DynamoDB table named MyCompany. The user Bob might also be granted access to manage his own IAM security credentials. User-based permissions can be managed or inline.

  • Resource-based permissions are attached to a resource. You can specify resource-based permissions for Amazon S3 buckets, Amazon Glacier vaults, Amazon SNS topics, Amazon SQS queues, and AWS Key Management Service encryption keys. Resource-based permissions let you specify who has access to the resource and what actions they can perform on it. Resource-based policies are inline only, not managed.

** AWS IAM Limitations
| Name | Limitation |
|------+------------|
| MFA devices in use per user | 1 |
| MFA devices in use per AWS account | 1 |
| Signing certificates per user | 2 |
| Roles per instance profile | 1 |
| Access keys per user | 2 |
| Server certificate count | Up to 250 server certificates per AWS account |
| Roles count | Up to 250 IAM roles per AWS account |
| Number of groups per user | 10 |
| Users per AWS account | 5000 |
| Groups per AWS account | 100 |

** AWS Security Token Service (AWS STS)
http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html
http://aws.amazon.com/code/1288653099190193

#+BEGIN_EXAMPLE
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
#+END_EXAMPLE
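A sketch of requesting temporary credentials for the current IAM user; the duration is a placeholder value:

#+BEGIN_EXAMPLE
# Request temporary credentials valid for one hour; the response
# carries AccessKeyId, SecretAccessKey, SessionToken, and Expiration
aws sts get-session-token --duration-seconds 3600
#+END_EXAMPLE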

** Policy Definition

  • Example: Allow Users to Access a Specific Bucket in Amazon S3 http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_examples.html
    #+BEGIN_EXAMPLE
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListAllMyBuckets",
          "Resource": "arn:aws:s3:::*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
          "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME/*"
        }
      ]
    }
    #+END_EXAMPLE
  • Policy Simulator: https://policysim.aws.amazon.com
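To actually attach a policy like the one above, one option is an inline user policy; a sketch where =bob=, the policy name, and =policy.json= (the JSON above saved to disk) are placeholders:

#+BEGIN_EXAMPLE
# Attach the JSON above as an inline policy, then verify it
aws iam put-user-policy --user-name bob \
    --policy-name s3-example-bucket-access \
    --policy-document file://policy.json
aws iam list-user-policies --user-name bob
#+END_EXAMPLE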

** # --8<-------------------------- separator ------------------------>8--
** TODO How is LDAP enforced?
** Federation: Access AWS with your existing corporate identity
http://www.slideshare.net/AmazonWebServices/delegating-access-to-your-aws-environment-sec303-aws-reinvent-2013
http://aws.amazon.com/iam/details/manage-federation/

Why use Federation:

  • SSO to the AWS Management Console
  • Build apps that transparently access AWS resources and APIs
  • Eliminate "yet another password" to manage ** # --8<-------------------------- separator ------------------------>8-- ** [#B] Security provided by AWS The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. It provides an extremely scalable, highly reliable platform that enables customers to deploy applications and data quickly and securely. Not only are your applications and data protected by highly secure facilities and infrastructure, but they're also protected by extensive network and security monitoring systems. These systems provide basic but important security measures such as distributed denial of service (DDoS) protection and password brute-force detection on AWS Accounts. Additional security measures include:

  • Secure access - Customer access points, also called API endpoints, allow secure HTTP access (HTTPS) so that you can establish secure communication sessions with your AWS services using SSL.
  • Built-in firewalls - You can control how accessible your instances are by configuring built-in firewall rules, from totally public to completely private, or somewhere in between. And when your instances reside within a Virtual Private Cloud (VPC) subnet, you can control egress as well as ingress.
  • Unique users - The AWS Identity and Access Management (IAM) tool allows you to control the level of access your own users have to your AWS infrastructure services. With AWS IAM, each user can have unique security credentials, eliminating the need for shared passwords or keys and allowing the security best practices of role separation and least privilege.
  • Multi-factor authentication (MFA) - AWS provides built-in support for multi-factor authentication (MFA) for use with AWS Accounts as well as individual IAM user accounts.
  • Private Subnets - The AWS Virtual Private Cloud (VPC) service allows you to add another layer of network security to your instances by creating private subnets and even adding an IPsec VPN tunnel between your home network and your AWS VPC.
  • Encrypted data storage - Customers can have the data and objects they store in Amazon S3, Glacier, Redshift, and Oracle RDS encrypted automatically using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
  • Dedicated connection option - The AWS Direct Connect service allows you to establish a dedicated network connection from your premise to AWS. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple logical connections to enable you to access both public and private IP environments within your AWS cloud.
  • Security logs - AWS CloudTrail provides logs of all user activity within your AWS account. You can see what actions were performed on each of your AWS resources and by whom.
  • Isolated GovCloud - For customers who require additional measures in order to comply with US ITAR regulations, AWS provides an entirely separate region called AWS GovCloud (US) that provides an environment where customers can run ITAR-compliant applications, and provides special endpoints that utilize only FIPS 140-2 encryption.
  • CloudHSM - For customers who must use Hardware Security Module (HSM) appliances for cryptographic key storage, AWS CloudHSM provides a highly secure and convenient way to store and manage keys.
  • Trusted Advisor - Provided automatically when you sign up for premium support, the Trusted Advisor service is a convenient way for you to see where you could use a little more security. It monitors AWS resources and alerts you to security configuration gaps such as overly permissive access to certain EC2 instance ports and S3 storage buckets, minimal use of role segregation using IAM, and weak password policies.
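The "built-in firewalls" above are security groups; a hedged sketch of opening one port, with a placeholder group ID:

#+BEGIN_EXAMPLE
# Allow inbound HTTPS from anywhere on one security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
# Review the resulting rules
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
#+END_EXAMPLE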

http://aws.amazon.com/security/

** # --8<-------------------------- separator ------------------------>8--
** TODO Some Highlights :noexport:

  • The security model for S3 objects is the most complicated one.
  • AWS roles differ from the usual understanding of roles.

** basic info
| Name | Summary |
|------+---------|
| Policy Simulator | https://policysim.aws.amazon.com |

Frequently used policies
| Name | Summary |
|------+---------|
| PowerUserAccess | Full access to AWS services, but no IAM user/group management |
| ReadOnlyAccess | Read-only access to all AWS services and resources |
| AdministratorAccess | Full access to all AWS services and resources |

Network security considerations
| Type | Summary |
|------+---------|
| DDoS | |
| MITM (Man in the middle) | |
| IP Spoofing | |
| Unauthorized Port Scanning | |
| Packet Sniffing | |
| Configuration Management | |

** DONE IAM Principle

  • DO NOT use root credentials. https://cloudnative.io/blog/2015/01/aws-iam-best-practices/

  • Use IAM to implement a least-privilege security strategy.

There's no way to control the root account's password policy, expiration, or permissions.

** DONE [#A] Difference between SAML, OpenID and OAuth :IMPORTANT:
CLOSED: [2015-04-09 Thu 16:38]
http://www.softwaresecured.com/2013/07/16/federated-identities-openid-vs-saml-vs-oauth/

| Name | Summary |
|------+---------|
| OpenID | Single sign-on for consumers |
| SAML | Single sign-on for enterprise users |
| OAuth | API authorization between applications |

Federated identity: link and use the electronic identities a user has across several identity management systems.

There are three major protocols for federated identity: OpenID, SAML, and OAuth.

** DONE To allow people to ping an instance, its security group must allow "All ICMP"
CLOSED: [2016-04-28 Thu 21:52]

  • [#A] AWS Lambda :noexport:
    AWS Lambda is an event-driven compute service that runs your code in response to "events" such as changes in data, website clicks, or messages from other AWS services, without you having to manage any compute infrastructure.

** TODO Lambda asynchronous execution
** TODO List all my AWS resources in Python
** HALF How to invoke lambda functions actively, not via triggers
http://docs.aws.amazon.com/lambda/latest/dg/with-dynamodb-create-function.html#with-dbb-invoke-manually

http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction-function.html#java-invocation-options
AWS Lambda supports synchronous and asynchronous invocation of a Lambda function.
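A sketch of both invocation modes from the CLI; the function name and payload file are placeholders:

#+BEGIN_EXAMPLE
# Synchronous: wait for the result and write it to out.json
aws lambda invoke --function-name my-function \
    --invocation-type RequestResponse \
    --payload file://input.json out.json

# Asynchronous: queue the event and return immediately (HTTP 202)
# (AWS CLI v2 may also need: --cli-binary-format raw-in-base64-out)
aws lambda invoke --function-name my-function \
    --invocation-type Event \
    --payload file://input.json out.json
#+END_EXAMPLE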

POST /2015-03-31/functions/FunctionName/invocations?Qualifier=Qualifier HTTP/1.1
X-Amz-Invocation-Type: InvocationType
X-Amz-Log-Type: LogType
X-Amz-Client-Context: ClientContext

** HALF When will AWS drop the AWS Lambda container?
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html
It takes time to set up a container and do the necessary bootstrapping, which adds some latency each time the Lambda function is invoked. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated, because AWS Lambda tries to reuse the container for subsequent invocations of the Lambda function.

After a Lambda function is executed, AWS Lambda maintains the container for some time in anticipation of another Lambda function invocation.

#+BEGIN_EXAMPLE
So AWS will start a container to run my lambda function, and reuse it if I call it again within a short period.

I'm interested in how long, on average, AWS keeps the container before deleting it: 30 seconds, 30 minutes, 2 hours?
#+END_EXAMPLE

** HALF Examples of How to Use AWS Lambda
http://docs.aws.amazon.com/lambda/latest/dg/use-cases.html
The use cases for AWS Lambda can be grouped into the following categories:

  • Using AWS Lambda with AWS services as event sources - Event sources publish events that cause the Lambda function to be invoked. These can be AWS services such as Amazon S3. For more information and tutorials, see the following topics: Using AWS Lambda with Amazon S3, Using AWS Lambda with Kinesis, Using AWS Lambda with Amazon DynamoDB, Using AWS Lambda with AWS CloudTrail, Using AWS Lambda with Amazon SNS from Different Accounts.
  • On-demand Lambda function invocation over HTTPS (Amazon API Gateway) - In addition to invoking Lambda functions using event sources, you can also invoke your Lambda function over HTTPS by defining a custom REST API and endpoint using API Gateway. For more information and a tutorial, see Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS).
  • On-demand Lambda function invocation (build your own event sources using custom apps) - User applications such as client, mobile, or web applications can publish events and invoke Lambda functions using the AWS SDKs or AWS Mobile SDKs, such as the AWS Mobile SDK for Android. For more information and a tutorial, see Getting Started and Using AWS Lambda as Mobile Application Backend (Custom Event Source: Android).
  • Scheduled events - You can also set up AWS Lambda to invoke your code on a regular, scheduled basis using the AWS Lambda console. You can specify a fixed rate (number of hours, days, or weeks) or a cron expression. For more information and a tutorial, see Using AWS Lambda with Scheduled Events. A CLI sketch of this case follows below.
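A hedged sketch of the "scheduled events" case via CloudWatch Events; the rule name, function name, region, account ID, and ARNs are placeholders:

#+BEGIN_EXAMPLE
# Fire an event every 5 minutes
aws events put-rule --name every-5-min --schedule-expression 'rate(5 minutes)'
# Let CloudWatch Events invoke the function
aws lambda add-permission --function-name my-function \
    --statement-id every-5-min-invoke --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/every-5-min
# Point the rule at the function
aws events put-targets --rule every-5-min \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:my-function'
#+END_EXAMPLE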

** # --8<-------------------------- separator ------------------------>8-- :noexport:
** DONE Understand Context Object in the function signature
CLOSED: [2017-10-12 Thu 14:02]
http://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html

While a Lambda function is executing, it can interact with the AWS Lambda service to get useful runtime information such as:

  • How much time is remaining before AWS Lambda terminates your Lambda function (timeout is one of the Lambda function configuration properties).
  • The CloudWatch log group and log stream associated with the Lambda function that is executing.
  • The AWS request ID returned to the client that invoked the Lambda function. You can use the request ID for any follow up inquiry with AWS support.
  • If the Lambda function is invoked through AWS Mobile SDK, you can learn more about the mobile application calling the Lambda function.

#+BEGIN_EXAMPLE
from __future__ import print_function
import time

def get_my_log_stream(event, context):
    print("Log stream name:", context.log_stream_name)
    print("Log group name:", context.log_group_name)
    print("Request ID:", context.aws_request_id)
    print("Mem. limits(MB):", context.memory_limit_in_mb)
    # The code executes quickly, so add an intentional 1-second delay
    # to make the remaining-time value visible.
    time.sleep(1)
    print("Time remaining (MS):", context.get_remaining_time_in_millis())

#+END_EXAMPLE

** DONE How to protect my functions in case a malicious client calls them?
CLOSED: [2017-10-12 Thu 13:22]
http://docs.aws.amazon.com/lambda/latest/dg/with-dynamodb-create-function.html#with-dbb-invoke-manually

aws lambda invoke \
    --invocation-type RequestResponse \
    --function-name ProcessDynamoDBStream \
    --region us-east-1 \
    --payload file://file-path/input.txt \
    --profile adminuser \
    outputfile.txt

** DONE How Does AWS Lambda Run My Code?
CLOSED: [2017-10-12 Thu 13:05]
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html
When a Lambda function is invoked, AWS Lambda launches a container.

** DONE try AWS python lambda
CLOSED: [2017-10-12 Thu 12:34]
http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html
*** TODO how to add my module

** AWS Lambda pricing: the first 1 million requests per month are free
https://aws.amazon.com/lambda/pricing/

** DONE [#A] AWS CLI create Lambda function cannot unzip uploaded file: zip is too big
CLOSED: [2017-10-17 Tue 07:38]
https://stackoverflow.com/questions/35235118/aws-cli-create-lambda-function-cannot-unzip-uploaded-file
https://stackoverflow.com/questions/43724185/could-not-unzip-uploaded-file-on-creation-of-lambda-function-using-python
https://forums.aws.amazon.com/thread.jspa?threadID=225033

  • AWS region :noexport:
    http://docs.aws.amazon.com/general/latest/gr/rande.html
    Auto Scaling endpoints:

| Region Name | Region | Endpoint |
|-------------+--------+----------|
| US East (N. Virginia) | us-east-1 | autoscaling.us-east-1.amazonaws.com |
| US West (Oregon) | us-west-2 | autoscaling.us-west-2.amazonaws.com |
| US West (N. California) | us-west-1 | autoscaling.us-west-1.amazonaws.com |
| EU (Ireland) | eu-west-1 | autoscaling.eu-west-1.amazonaws.com |
| EU (Frankfurt) | eu-central-1 | autoscaling.eu-central-1.amazonaws.com |
| Asia Pacific (Singapore) | ap-southeast-1 | autoscaling.ap-southeast-1.amazonaws.com |
| Asia Pacific (Sydney) | ap-southeast-2 | autoscaling.ap-southeast-2.amazonaws.com |
| Asia Pacific (Tokyo) | ap-northeast-1 | autoscaling.ap-northeast-1.amazonaws.com |
| South America (Sao Paulo) | sa-east-1 | autoscaling.sa-east-1.amazonaws.com |

  • [#A] Amazon Services :noexport:IMPORTANT:
    http://www.allthingsdistributed.com

| Name | Link |
|------+------|
| Architecture | http://aws.amazon.com/architecture/ |
| Release notes for components | http://aws.amazon.com/releasenotes/Amazon-DynamoDB?browse=1 |
| Service Health Dashboard | http://status.aws.amazon.com |
| AWS Overview | http://d0.awsstatic.com/whitepapers/aws-overview.pdf |

https://www.webassessor.com/wa.do?page=publicHome&branding=AMAZON

** DONE AWS CloudHSM (Hardware Security Module)
CLOSED: [2015-04-02 Thu 13:43]
http://aws.amazon.com/cloudhsm/

  • CloudHSM instances are provisioned inside your VPC with an IP address that you specify

  • By protecting your keys in hardware and preventing them from being accessed by third parties, AWS CloudHSM can help you comply with the most stringent regulatory and contractual requirements for key protection.

** # --8<-------------------------- separator ------------------------>8--
** TODO Programming with AWS APIs
** TODO Shared Security Responsibility Model
** TODO CIA and AAA models, ingress vs. egress filtering, and which AWS services and features fit
** TODO What's RAID 10?
You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4kB IOPS. Which EC2 option will meet this requirement?
  A. EBS provisioned IOPS
  B. SSD instance store
  C. EBS optimized instances
  D. High Storage instance configured in RAID 10
** TODO What's a NACL?
Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)
  A. The routing table of subnet A has no target route to subnet B
  B. The security group attached to instance B does not allow inbound ICMP traffic
  C. The policy linked to the IAM role on instance A is not configured correctly
  D. The NACL on subnet B does not allow outbound ICMP traffic
** TODO How to configure and troubleshoot a VPC inside and out, including basic IP subnetting
http://www.rightbrainnetworks.com/blog/tips-for-passing-amazon-aws-certified-solutions-architect-exam/
** TODO The difference in use cases between Simple Workflow (SWF), Simple Queue Service (SQS), and Simple Notification Service (SNS)
http://www.rightbrainnetworks.com/blog/tips-for-passing-amazon-aws-certified-solutions-architect-exam/
** TODO How to properly use various EBS volume configurations and snapshots to optimize I/O performance and data durability
http://www.rightbrainnetworks.com/blog/tips-for-passing-amazon-aws-certified-solutions-architect-exam/
** # --8<-------------------------- separator ------------------------>8--
** TODO Jeff Barr's (AWS evangelist) book on AWS is pretty good
** TODO [#A] Linuxacademy.com has excellent training
** TODO [#A] Ryan's course from Udemy: AWS Certified Solutions Architect - Associate 2015
https://www.udemy.com/aws-certified-solutions-architect-associate-2015/?couponCode=crunchadeal&siteID=AfpokvaRFDA-rxdyA2VpHcsL0YfWSKDw7g&LSNPUBID=AfpokvaRFDA
https://www.youtube.com/watch?v=-V2w3VfTxGE
** TODO CBT Nuggets has tutorials
** TODO Check AWS certificates: developer and architect
http://aws.amazon.com/certification/

http://aws.amazon.com/certification/certified-devops-engineer-professional/ AWS Certified DevOps Engineer - Professional
*** AWS Certified Developer
http://aws.amazon.com/certification/certified-developer-associate/ AWS Certified Developer - Associate

https://www.webassessor.com/wa.do?page=publicHome&branding=AMAZON
*** TODO mail: AWS Certification Test Taker Account Confirmation :noexport:
[[gnus:mail.misc#1555549283.56240.1427341627643.JavaMail.root@prodmq][Email from [email protected] (Wed, 25 Mar 2015 20:47:07 -0700 (MST)): AWS Certification Test Taker A]]
#+begin_example
From: [email protected]
Subject: AWS Certification Test Taker Account Confirmation
To: [email protected]
Date: Wed, 25 Mar 2015 23:47:07 -0400

Dear Denny,

Thank you for registering for an Amazon Web Services (AWS) Certification Test Taker account. You can use this account to schedule and take AWS certification exams.

Account Login: [email protected]

Scheduling an Exam To schedule your exam, follow these steps:

  1. Go to http://www.webassessor.com/amazon
  2. Log into your account
  3. Click "Register for an Exam" in the upper right corner
  4. Select the exam you want to take and click "Buy Now"
  5. Find the testing center where you want to take the exam and click "Select"
  6. Select the date and time for your exam appointment
  7. Review and acknowledge the important terms for scheduling your exam
  8. Click "Select", then "Continue"
  9. Complete payment information and click "Submit"

Questions? Do not reply to this email. If you have questions, please contact us.

Thanks, AWS Training & Certification

#+end_example
*** AWS Certified SysOps Administrator - Associate
http://aws.amazon.com/certification/certified-sysops-admin-associate/
** # --8<-------------------------- separator ------------------------>8--
** TODO Data persistence of VPC EC2
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_examsample.pdf
Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
  A. The Elastic IP will be dissociated from the instance
  B. All data on instance-store devices will be lost
  C. All data on EBS (Elastic Block Store) devices will be lost
  D. The ENI (Elastic Network Interface) is detached
  E. The underlying host for the instance is changed
** TODO How to build and use a threat model
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_blueprint.pdf
** TODO CIA and AAA models, ingress vs. egress filtering
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_blueprint.pdf
** TODO Incorporating common conventional security products (Firewall, IDS: HIDS/NIDS, SIEM, VPN)
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_blueprint.pdf
** TODO DDoS mitigation
** TODO IAM
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
Every user you create in the IAM system starts with ______.
  A. partial permissions
  B. full permissions
  C. no permissions

Can you create IAM security credentials for existing users?
  A. Yes, existing users can have security credentials associated with their account. --
  B. No, IAM requires that all users who have credentials set up are not existing users
  C. No, security credentials are created within GROUPS, and then users are associated to GROUPS at a later time
  D. Yes, but only IAM credentials, not ordinary security credentials
** TODO Can we attach an EBS volume to more than one EC2 instance at the same time?
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
Can we attach an EBS volume to more than one EC2 instance at the same time?
  A. No
  B. Yes
  C. Only EC2-optimized EBS volumes
  D. Only in read mode
** TODO MySQL storage engines: InnoDB and MyISAM
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html

Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine.
  A. InnoDB
  B. MyISAM
** TODO [#A] Difference among: Elastic Beanstalk, OpsWorks, CloudFormation, CodeDeploy
** Trusted Advisor: optimize cost and identify potential issues
https://aws.amazon.com/premiumsupport/trustedadvisor/

AWS Trusted Advisor provides best practices in four categories: cost optimization, security, fault tolerance, and performance improvement.

** # --8<-------------------------- separator ------------------------>8--

  • [#A] Amazon EC2: Virtual Servers in the Cloud :noexport:IMPORTANT:
  • AWS doesn't charge hourly usage for a stopped instance, or data transfer fees, but does charge for the storage of any Amazon EBS volumes.

Limitations:
| Name | Comment |
|------+---------|
| Reserved Instance counts | 20 instance reservations per Availability Zone, per month |
| Elastic IP | By default, all AWS accounts are limited to 5 EIPs |
| EC2 availability | 99.95% |
| EBS availability | 99.95% |
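To check some of these limits for your own account, one hedged option:

#+BEGIN_EXAMPLE
# Per-region instance and EIP limits for this account
aws ec2 describe-account-attributes \
    --attribute-names max-instances max-elastic-ips
# Count the EIPs currently allocated
aws ec2 describe-addresses --query 'length(Addresses)'
#+END_EXAMPLE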

EC2 supports two types of block devices:
| Name | Summary |
|------+---------|
| Instance store volumes | The underlying hardware is physically attached to the host computer of the instance |
| EBS volumes | Remote storage devices |

  • Consideration of cloud components
    | Key points | EC2 example |
    |------------+-------------|
    | What detailed features it provides | |
    | What are the limitations and trade-offs | |
    | Procedure to scale up and scale down | |
    | Downtime during scale | |
    | How to avoid SPOF | |
    | How to do backup: EBS snapshot, AMI | |
    | Downtime during backup | |

** TODO Why don't EC2 spot instances get a 5-minute delayed shutdown?
** # --8<-------------------------- separator ------------------------>8--
** TODO [#A] Storage devices will be decommissioned eventually, but will they still be reused?
Once again your customers are concerned about the security of their sensitive data, and with their latest enquiry they ask about what happens to old storage devices on AWS. What would be the best answer to this question?

  A. AWS uses a 3rd party security organisation to destroy data as part of the decommissioning process.
  B. AWS reformats the disks and uses them again.
  C. AWS uses the techniques detailed in DoD 5220.22-M to destroy data as part of the decommissioning process.
  D. AWS uses their own proprietary software to destroy data as part of the decommissioning process.

C

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual ") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf
** TODO [#A] What's the availability for instance store volumes?
https://news.ycombinator.com/item?id=2470298
** TODO [#A] Why doesn't it take longer to snapshot an entire 16 TB volume compared to an entire 1 TB volume?
http://aws.amazon.com/ebs/faqs/

Q: Does it take longer to snapshot an entire 16 TB volume as compared to an entire 1 TB volume?

No, an EBS snapshot of an entire 16 TB volume is designed to take no longer than the time it takes to snapshot an entire 1 TB volume.

** # --8<-------------------------- separator ------------------------>8--
** TODO [#A] Difference between "Create Image" and "Take Snapshot"
** TODO EC2: how to check when an EC2 instance was last started/restarted?
** TODO Difference between ECU and vCPU?
http://www.sudops.com/amazon-ecu-vs-vcpu.html
** TODO [#A] When EC2 terminates spot instances, will it notify users/VMs to allow some cleanup?
** TODO [#B] How is VM Enhanced Networking implemented?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Amazon EC2 supports enhanced networking capabilities using single root I/O virtualization (SR-IOV).

To enable enhanced networking on your instance, you must ensure that its kernel has the ixgbevf module installed and that you set the sriovNetSupport attribute for the instance.
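A hedged sketch of setting that attribute from the CLI; the instance ID is a placeholder, and the instance must be stopped first:

#+BEGIN_EXAMPLE
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
    --sriov-net-support simple
aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 \
    --attribute sriovNetSupport       # verify the change
aws ec2 start-instances --instance-ids i-1234567890abcdef0
#+END_EXAMPLE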

** TODO [#A] How is AWS optimized I/O implemented: General Purpose (SSD) volumes, Provisioned IOPS (SSD) volumes?
** TODO Difference between "dedicated instances" and the "single tenant option"
** # --8<-------------------------- separator ------------------------>8--
** DONE Feature: Enable termination protection
CLOSED: [2015-05-04 Mon 10:31]
You can protect instances from being accidentally terminated. Once enabled, you won't be able to terminate this instance via the API or the AWS Management Console until termination protection has been disabled.
** DONE Feature: Shutdown behavior
CLOSED: [2015-05-04 Mon 10:32]
Specify the instance behavior when an OS-level shutdown is performed. Instances can be either terminated or stopped.
** DONE [#B] Feature: Placement Groups: enables applications to participate in a low-latency, 10 Gbps network
CLOSED: [2015-05-06 Wed 12:03]
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

  • If you stop an instance in a placement group and then start it again, it still runs in the placement group.
  • A placement group can't span multiple Availability Zones.
  • Not all instance types can be launched into a placement group.
  • You can't merge placement groups.
  • A placement group can span peered VPCs
  • You can't move an existing instance into a placement group.

*** DONE A placement group can span peered VPCs
CLOSED: [2015-05-06 Wed 12:45]
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about VPC peering connections, see VPC Peering in the Amazon VPC User Guide.

** # --8<-------------------------- separator ------------------------>8--
** DONE [#A] Difference between Reboot, Stop and Terminate
CLOSED: [2015-05-02 Sat 18:11]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html#lifecycle-differences
| Characteristic | Reboot | Stop/start (Amazon EBS-backed instances only) | Terminate |
|----------------+--------+-----------------------------------------------+-----------|
| Host computer | The instance stays on the same host computer | The instance runs on a new host computer | None |
| Private and public IP addresses | These addresses stay the same | EC2-Classic: the instance gets new private and public IP addresses. EC2-VPC: the instance keeps its private IP address and gets a new public IP address, unless it has an Elastic IP address (EIP), which doesn't change during a stop/start | |
| Elastic IP addresses (EIP) | The EIP remains associated with the instance | EC2-Classic: the EIP is disassociated from the instance. EC2-VPC: the EIP remains associated with the instance | The EIP is disassociated from the instance |
| Instance store volumes | The data is preserved | The data is erased | The data is erased |
| Root device volume | The volume is preserved | The volume is preserved | The volume is deleted by default |
| Billing | The instance billing hour doesn't change | You stop incurring charges as soon as the state changes to stopping; each transition from stopped to pending starts a new instance billing hour | You stop incurring charges as soon as the state changes to shutting-down |
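The corresponding CLI calls, as a sketch with a placeholder instance ID:

#+BEGIN_EXAMPLE
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 start-instances --instance-ids i-1234567890abcdef0
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
# Enable termination protection (see the feature above)
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
    --disable-api-termination
#+END_EXAMPLE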

** DONE [#A] Elastic IP Addresses :IMPORTANT:
CLOSED: [2015-04-14 Tue 17:27]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

  • By default, all AWS accounts are limited to 5 EIPs, because public (IPv4) Internet addresses are a scarce public resource.

  • EIP is useful because DNS propagation takes time.

| Characteristic | EC2-Classic | EC2-VPC |
|----------------+-------------+---------|
| Allocation | When you allocate an EIP, it's for use only in EC2-Classic. | When you allocate an EIP, it's for use only in a VPC. |
| Association | You associate an EIP with an instance. | An EIP is a property of an elastic network interface (ENI). You can associate an EIP with an instance by updating the ENI attached to the instance. |
| Reassociation | If you try to associate an EIP that's already associated with another instance, the address is automatically associated with the new instance. | If your account supports EC2-VPC only, and you try to associate an EIP that's already associated with another instance, the address is automatically associated with the new instance. If you're using a VPC in an EC2-Classic account, and you try to associate an EIP that's already associated with another instance, it succeeds only if you allowed reassociation. |
| Instance stop | If you stop an instance, its EIP is disassociated, and you must re-associate the EIP when you restart the instance. | If you stop an instance, its EIP remains associated. |
| Multiple IPs | Instances support only a single private IP address and a corresponding EIP. | Instances support multiple IP addresses, and each one can have a corresponding EIP. |
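A sketch of the EIP lifecycle in a VPC, with placeholder IDs:

#+BEGIN_EXAMPLE
aws ec2 allocate-address --domain vpc          # returns an AllocationId
aws ec2 associate-address --instance-id i-1234567890abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
#+END_EXAMPLE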

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. However, unlike traditional static IP addresses, Elastic IP addresses enable you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to instances in your account in a particular region. For DR, you can also pre-allocate some IP addresses for the most critical systems so that their IP addresses are already known before disaster strikes. This can simplify the execution of the DR plan.

*** DONE [#A] Why do we need an EIP, instead of a normal public IP? :IMPORTANT:
CLOSED: [2015-04-14 Tue 17:17]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
If you use dynamic DNS to map an existing DNS name to a new instance's public IP address, it might take up to 24 hours for the IP address to propagate through the Internet. As a result, new instances might not receive traffic while terminated instances continue to receive requests. To solve this problem, use an EIP.

*** DONE Charge for Elastic IP
CLOSED: [2015-04-15 Wed 14:03]
Elastic IP Addresses - You can have one Elastic IP (EIP) address associated with a running instance at no charge.

** DONE [#A] Feature: what happens if we stop a running instance
CLOSED: [2015-04-01 Wed 00:14]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
When you stop a running instance, the following happens:

The instance performs a normal shutdown and stops running; its status changes to stopping and then stopped.

Any Amazon EBS volumes remain attached to the instance, and their data persists.

Any data stored in the RAM of the host computer or the instance store volumes of the host computer is gone.

EC2-Classic: We release the public and private IP addresses for the instance when you stop the instance, and assign new ones when you restart it.

EC2-VPC: The instance retains its private IP addresses when stopped and restarted. We release the public IP address and assign a new one when you restart it.

EC2-Classic: We disassociate any Elastic IP address (EIP) that's associated with the instance. You're charged for Elastic IP addresses that aren't associated with an instance. When you restart the instance, you must associate the Elastic IP address with the instance; we don't do this automatically.

EC2-VPC: The instance retains its associated Elastic IP addresses (EIP). You're charged for any Elastic IP addresses associated with a stopped instance.

When you stop and restart a Windows instance, by default, we change the instance host name to match the new IP address and initiate a reboot. By default, we also change the drive letters for any attached Amazon EBS volumes. For more information about these defaults and how you can change them, see Configuring a Windows Instance Using the EC2Config Service in the Amazon EC2 User Guide for Microsoft Windows Instances.

If you've registered the instance with a load balancer, it's likely that the load balancer won't be able to route traffic to your instance after you've stopped and restarted it. You must de-register the instance from the load balancer after stopping the instance, and then re-register after starting the instance. For more information, see De-Registering and Registering Amazon EC2 Instances in the Elastic Load Balancing Developer Guide.

When you stop a ClassicLink instance, it's unlinked from the VPC to which it was linked. You must link the instance to the VPC again after restarting it. For more information about ClassicLink, see ClassicLink.

** DONE [#A] For Reserved Instances, if I stop one for several hours, will I still be charged?
CLOSED: [2015-04-06 Mon 16:33]
http://aws.amazon.com/ec2/purchasing-options/reserved-instances/

When you are comparing TCO, we highly recommend that you use the Reserved Instance (RI) pricing option in your calculations. They will provide the best apples-to-apples TCO comparison between on-premises and cloud infrastructure. Reserved Instances are similar to on-premises servers because in both cases, there is a one-time upfront cost. However, unlike on-premises servers, Reserved Instances can be "purchased" and provisioned within minutes, and you have the flexibility to turn them off when you don't need them and stop paying the hourly rate.

A Reserved Instance is a pricing model. If you buy a Reserved Instance but no running instances match it, or no instances are running, you're still charged every hour.

** [#A] If you restarted an instance N times within one hour, you will be charged for N full hours
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

When you stop an instance, we shut it down. We don't charge hourly usage for a stopped instance, or data transfer fees, but we do charge for the storage for any Amazon EBS volumes. Each time you start a stopped instance we charge a full instance hour, even if you make this transition multiple times within a single hour.

--8<-------------------------- separator ------------------------>8--

A user has launched an EBS-backed instance. The user started the instance at 9 AM. Between 9 AM and 10 AM, the user was testing some script; thus, he stopped the instance twice and restarted it. In the same hour the user rebooted the instance once. For how many instance hours will AWS charge the user?

  A. 4 hours
  B. 3 hours
  C. 1 hour
  D. 2 hours

B

A user can stop/start or reboot an EC2 instance using the AWS console, the Amazon EC2 CLI or the Amazon EC2 API. Rebooting an instance is equivalent to rebooting an operating system. When the instance is rebooted, AWS will not charge the user for the extra hours. In case the user stops the instance, AWS does not charge the running cost but charges only the EBS storage cost. If the user starts and stops the instance multiple times in a single hour, AWS will charge the user for every start and stop. In this case, since the instance was stopped and started twice, it will cost the user 3 instance hours.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html#lifecycle-differences

** DONE restart won't charge an extra hour, while stop/start will
CLOSED: [2015-05-02 Sat 14:25]

A user has launched an EBS-backed EC2 instance. What will be the difference between performing the restart or stop/start options on that instance?

  A. For restart it charges extra only once, while every stop/start will be charged as a separate hour
  B. For restart it does not charge for an extra hour, while every stop/start will be charged as a separate hour
  C. Every restart is charged by AWS as a separate hour, while multiple start/stop actions during a single hour will be counted as a single hour
  D. For every restart or start/stop it will be charged as a separate hour

B

For an EC2 instance launched from an EBS-backed AMI, each time the instance state changes from stopped to running, AWS charges a full instance hour, even if these transitions happen multiple times within a single hour. Rebooting an instance, however, does not start a new instance billing hour.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html

** # --8<-------------------------- separator ------------------------>8--
** DONE [#B] EC2 Spot Instances :IMPORTANT:
CLOSED: [2015-03-31 Tue 15:25]
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html

  • Spot prices are typically far below (recently 86% lower, on average) On Demand prices

Spot Instances can significantly lower their Amazon EC2 costs for use cases like batch processing, scientific research, image processing, video encoding, data and web crawling, financial analysis, and testing.

  • The key difference between Spot Instances and On-Demand instances is that Spot Instances might not start immediately.

  • Amazon EC2 adjusts the Spot Price periodically as requests come in and available supply changes.

  • To use Spot Instances, you place a Spot Instance request specifying the maximum price you are willing to pay per instance hour.

    If your maximum price bid exceeds the current Spot Price, your request is fulfilled and your instances will run until either you choose to terminate them or the Spot Price increases above your maximum price (whichever is sooner).

  • If you're running Spot Instances and your maximum price no longer meets or exceeds the current Spot Price, your instances will be terminated.
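A minimal CLI sketch of this flow (the price, instance type, and launch specification file are placeholders):

#+BEGIN_EXAMPLE
# Check recent Spot prices for an instance type
aws ec2 describe-spot-price-history --instance-types m3.medium --product-descriptions "Linux/UNIX" --max-items 5

# Place a one-time Spot request with your maximum price per instance hour
aws ec2 request-spot-instances --spot-price "0.05" --instance-count 1 --type "one-time" --launch-specification file://specification.json
#+END_EXAMPLE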

http://aws.amazon.com/ec2/purchasing-options/spot-instances/

How Can Spot Instances Optimize My EC2 Utilization?

Using Spot Instances can generate savings that you can keep, invest elsewhere, or pass on to your customers. Because Spot prices are typically far below (recently 86% lower, on average) On Demand prices, you can lower the cost of your interruption-tolerant tasks and, potentially, accelerate those applications when there are many Spot Instances available.

There are four general categories of time-flexible and interruption-tolerant tasks that work well with Spot Instances:

  • Optional tasks: nice-to-have but not strictly required. When Spot prices are low, you can run your optional tasks, and when they rise too high you can stop them.

  • Delayable tasks: tasks with deadlines that allow you to be flexible about when you run your computations (e.g., weekly batch jobs or media transcoding).

  • Acceleratable tasks: tasks that can be sped up by adding additional computing power. You can run Spot Instances to accelerate your computing when the Spot price is low while maintaining a baseline layer of On-Demand or Reserved Instances (e.g., using Spot task nodes and On-Demand master and core nodes in an Elastic MapReduce job).

  • Large scale tasks: tasks that may require computing scale you can't access any other way. With Spot, you can cost-effectively run thousands or more instances in AWS regions around the world.
** DONE Amazon EC2 Reserved Instances
CLOSED: [2015-03-31 Tue 17:28]
http://aws.amazon.com/ec2/purchasing-options/reserved-instances/

  • Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing.
  • AWS offers Reserved Instances for 1 or 3 year terms.
  • With the All Upfront option, you pay for the entire term with one upfront payment; this option gives the largest discount.

Savings Comparison of 1 Year Reserved Instances over On-Demand Instances

| Utilization Rate | On-Demand | 1 Year Medium Utilization | 1 Year Heavy Utilization |
|------------------+-----------+---------------------------+--------------------------|
| 10%              | $122.98   | -234%                     | -525%                    |
| 20%              | $245.95   | -86%                      | -212%                    |
| 30%              | $368.93   | -37%                      | -108%                    |
| 40%              | $491.90   | -13%                      | -56%                     |
| 50%              | $614.88   | 2%                        | -25%                     |
| 60%              | $737.86   | 12%                       | -4%                      |
| 70%              | $860.83   | 19%                       | 11%                      |
| 80%              | $983.81   | 24%                       | 22%                      |
| 90%              | $1,106.78 | 28%                       | 31%                      |
| 100%             | $1,229.76 | 31%                       | 38%                      |

Utilization Rate = % of time your instance is running; prices shown for US East Region as of July 20th 2014
** TODO What does AWS dedicated instances mean: one physical server for one VM? my VMs hosted in one server? my VMs shared with people who bought dedicated server?
** DONE Amazon EC2 Dedicated Instances?
CLOSED: [2015-03-31 Tue 17:50]
http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/
https://aws.amazon.com/blogs/aws/amazon-ec2-dedicated-instances/
https://www.cloudyn.com/blog/moving-to-dedicated-instances-in-aws/
https://gigaom.com/2014/04/22/the-use-of-amazons-dedicated-cloud-instances-may-be-on-rise-but-does-that-make-sense/
http://blog.trendmicro.com/dedicated-servers-vs-the-new-amazon-ec2-dedicated-instance/

  • AWS dedicated instances are instances that do not share hardware with other AWS accounts.

  • In addition to the per-instance price, there is a dedicated per-region fee (you pay it once per hour, regardless of how many Dedicated Instances you're running).
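A hedged CLI sketch (the AMI ID is a placeholder) for launching an instance with dedicated tenancy:

#+BEGIN_EXAMPLE
# Launch an instance whose hardware is not shared with other AWS accounts
aws ec2 run-instances --image-id ami-c3b8d6aa --count 1 --instance-type m3.medium --placement Tenancy=dedicated
#+END_EXAMPLE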

https://gigaom.com/2014/04/22/the-use-of-amazons-dedicated-cloud-instances-may-be-on-rise-but-does-that-make-sense/

  • But now, 9 months after price cuts, 0.5 percent of the instances it monitors are dedicated. (Cloudyn said it has eyes on 8 percent of total AWS workloads.)

Real reasons behind such a move:

  • Compliance: for one reason or another, an organization may have certain restrictions and requirements on where data is placed and how it is accessed. Having dedicated instances on your own hardware provides peace of mind that no other organization, company or deployment will be running alongside you.

  • Performance: while mostly theoretical, having dedicated hardware for your use only avoids other deployments utilizing the same host and reducing your performance. Some companies wish to avoid such "noisy neighbors" in their pool.

  • As a well-known example, Netflix wished to avoid such neighbors, so they upgraded to the largest available instances, which ended up being effectively dedicated since no one else could fit on the same host.
** # --8<-------------------------- separator ------------------------>8--
** DONE [#A] EC2 tags: categorize your AWS resources in a flexible way
CLOSED: [2015-04-01 Wed 13:20]
http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html

  • Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. You can use tags to organize your AWS bill to reflect your own cost structure.

  • You can't terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier.

    For example, to delete snapshots that you tagged with a tag key called DeleteMe, you must first get a list of those snapshots using DescribeSnapshots with a filter that specifies the tag.
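A hedged sketch of that flow (the DeleteMe tag key matches the example above; the snapshot IDs are whatever the query returns):

#+BEGIN_EXAMPLE
# List snapshot IDs carrying the DeleteMe tag key, then delete them one by one
for s in $(aws ec2 describe-snapshots --owner-ids self --filters Name=tag-key,Values=DeleteMe --query 'Snapshots[].SnapshotId' --output text); do
  aws ec2 delete-snapshot --snapshot-id "$s"
done
#+END_EXAMPLE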

Adding a Name Tag to Your Instance

To add the tag Name=MyInstance to your instance, use the create-tags command as follows:

$ aws ec2 create-tags --resources i-xxxxxxxx --tags Key=Name,Value=MyInstance

The following is example output:

{ "return": "true" } For more information, see Tagging Your Resources in the Amazon EC2 User Guide for Linux Instances. ** [#A] difference between EBS backed AMI vs S3-Backed AMI :IMPORTANT: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html http://blog.magpiebrain.com/2010/07/19/aws-s3-vs-ebs-backed-instances/

  • Amazon EC2 instance store-backed AMIs can't be stopped; they're either running or terminated.

| Characteristic        | Amazon EBS-Backed                                                                                                                                                                                                                            | Amazon Instance Store-Backed                                                                                                                                      |
|-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Boot time             | Usually less than 1 minute                                                                                                                                                                                                                   | Usually less than 5 minutes                                                                                                                                       |
| Size limit            | 1 TiB                                                                                                                                                                                                                                        | 10 GiB                                                                                                                                                            |
| Root device volume    | Amazon EBS volume                                                                                                                                                                                                                            | Instance store volume                                                                                                                                             |
| Data persistence      | By default, the root volume is deleted when the instance terminates. Data on any other Amazon EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the instance. | Data on any instance store volumes persists only during the life of the instance. Data on any Amazon EBS volumes persists after instance termination by default. |
| Upgrading             | The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped.                                                                                                                                             | Instance attributes are fixed for the life of an instance.                                                                                                        |
| Charges               | You're charged for instance usage, Amazon EBS volume usage, and storing your AMI as an Amazon EBS snapshot.                                                                                                                                  | You're charged for instance usage and storing your AMI in Amazon S3.                                                                                              |
| AMI creation/bundling | Uses a single command/call                                                                                                                                                                                                                   | Requires installation and use of AMI tools                                                                                                                        |
| Stopped state         | Can be placed in stopped state where the instance is not running, but the root volume is persisted in Amazon EBS                                                                                                                             | Cannot be in stopped state; instances are running or terminated                                                                                                   |
** EC2 instance flavor
*** DONE EC2 Instances: t2, m3, r3, c3, m4...
CLOSED: [2015-04-05 Sun 18:09]
http://aws.amazon.com/ec2/instance-types/
http://www.ec2instances.info

| Name | Type              | Summary                                                                               |
|------+-------------------+---------------------------------------------------------------------------------------|
| T2   | General Purpose   | Burstable: good when you don't need the full CPU consistently but occasionally burst  |
| M3   | General Purpose   | A balance of compute, memory, and network resources                                   |
|------+-------------------+---------------------------------------------------------------------------------------|
| C3   | Compute Optimized |                                                                                       |
| C4   | Compute Optimized |                                                                                       |
|------+-------------------+---------------------------------------------------------------------------------------|
| R3   | Memory Optimized  |                                                                                       |
|------+-------------------+---------------------------------------------------------------------------------------|
| G2   | GPU               | For graphics and general purpose GPU compute applications                             |
|------+-------------------+---------------------------------------------------------------------------------------|
| I2   | Storage Optimized | High I/O Instances                                                                    |
| D2   | Storage Optimized | Dense-storage Instances                                                               |
*** DONE Amazon EC2 instances are grouped into 10 families
CLOSED: [2015-04-13 Mon 19:41]
http://aws.amazon.com/ec2/faqs/

The 10 families are: first and second generation Standard instances, High-Memory, High-CPU, Cluster Compute, Cluster GPU, High I/O, Dense-storage, High Memory Cluster, and t1.micro.

| Name                      | Summary                                                                          |
|---------------------------+-----------------------------------------------------------------------------------|
| Standard Instances        | memory to CPU ratios suitable for most general purpose applications               |
| Second Standard Instances | provide higher absolute CPU performance for CPU intensive applications            |
| High-Memory instances     | offer larger memory sizes for memory-intensive applications                       |
| High-CPU instances        | have proportionally more CPU resources than memory (RAM)                          |
| Cluster Compute Instances | large computational power coupled with a high performance network. Good for HPC   |
| Cluster GPU instances     | NVIDIA Tesla GPUs for high performance parallel computing                         |
| High I/O instances        | very high, low latency, I/O capacity using SSD-based local instance storage       |
| Dense-storage instances   | high storage density and sequential I/O performance                               |
| t1.micro instances        |                                                                                   |
*** DONE M1 VS M3 Standard instances: choose M3 for most cases
CLOSED: [2015-04-13 Mon 19:43]
http://aws.amazon.com/ec2/faqs/

  • M3 instances provide better, more consistent performance than M1 instances for most use-cases.
  • M3 instances also offer SSD-based instance storage
  • M3 instances are also less expensive than M1 instances.

However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications.

#+BEGIN_EXAMPLE
Q: M1 and M3 Standard instances have the same ratio of CPU and memory. When should I use one instance over the other?

M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-based instance storage that delivers higher I/O performance. M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources. However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications.

#+END_EXAMPLE
** # --8<-------------------------- separator ------------------------>8--
** DONE EC2 default login users
CLOSED: [2015-04-01 Wed 14:32]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html

  • For an Amazon Linux AMI, the user name is ec2-user.
  • For a RHEL5 AMI, the user name is either root or ec2-user.
  • For an Ubuntu AMI, the user name is ubuntu.
  • For a Fedora AMI, the user name is either fedora or ec2-user.
  • For SUSE Linux, the user name is root.
  • Otherwise, if ec2-user and root don't work, check with the AMI provider.
** DONE When we take a snapshot of an EC2 VM, will it also snapshot the attached volume: yes
CLOSED: [2015-04-14 Tue 10:33]
http://aws.amazon.com/ec2/faqs/
  • Snapshots only capture data that has been written to your Amazon EBS volume

Q: Do volumes need to be un-mounted in order to take a snapshot? Does the snapshot need to complete before the volume can be used again?

No, snapshots can be done in real time while the volume is attached and in use. However, snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or OS. In order to ensure consistent snapshots on volumes attached to an instance, we recommend cleanly detaching the volume, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the machine to take a clean snapshot.
** DONE How do Dense-storage instances compare to High I/O instances?
CLOSED: [2015-04-14 Tue 11:13]
http://aws.amazon.com/ec2/faqs/

Q. How do Dense-storage instances compare to High I/O instances?

High I/O instances (I2) are targeted at workloads that demand low latency and high random I/O in addition to moderate storage density, and provide the best price/IOPS across EC2 instance types. Dense-storage instances (D2) are optimized for applications that require high sequential read/write access and low cost storage for very large data sets, and provide the best price/GB-storage and price/disk-throughput across EC2 instances.
** DONE TRIM command for SSD performance tuning
CLOSED: [2015-04-14 Tue 11:07]
http://aws.amazon.com/ec2/faqs/

The TRIM command allows the operating system to inform SSDs which blocks of data are no longer considered in use and can be wiped internally.

The TRIM command allows the operating system to inform SSDs which blocks of data are no longer considered in use and can be wiped internally. In the absence of TRIM, future write operations to the involved blocks can slow down significantly. Currently HI1.4xlarge instances do not support TRIM, but TRIM support will be deployed within the next few months. Customers with extremely intensive full LBA random write workloads should plan accordingly. Please note that the current disk provisioning scheme for High I/O instances minimizes the impact of write amplification and most customers will not experience any issues.
** DONE device of EC2 question: /dev/sda1
CLOSED: [2015-04-15 Wed 11:28]
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html

Select the most correct answer: The device name /dev/sda1 (within Amazon EC2) is _____
A. Possible for EBS volumes
B. Reserved for the root device
C. Recommended for EBS volumes
D. Recommended for instance store volumes

The answer is B.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html

What does specifying the mapping /dev/sdc=none when launching an instance do?
A. Prevents /dev/sdc from creating the instance.
B. Prevents /dev/sdc from deleting the instance.
C. Set the value of /dev/sdc to 'zero'.
D. Prevents /dev/sdc from attaching to the instance.
** DONE EC2 role question
CLOSED: [2015-04-15 Wed 12:02]

AWS Certified SysOps Administrator Associate Practice Exam: You need to provide Amazon Elastic Compute Cloud (EC2) instances with programmatic access to the AWS API to enable downloading of pictures from Amazon Simple Storage Service.

What AWS feature allows you to do this in the most secure manner?

A. Launch an instance with an AWS Identity and Access Management (IAM) role to restrict AWS API access for the instance.
B. Set up an IAM user for the instance to restrict access to the AWS API and assign it at launch.
C. Set up an IAM group with restricted AWS API access and put the instance in the group at launch.
D. Pass AWS access credentials in the User Data field when the instance is launched.

A
** DONE EC2 can't take snapshot for instance-store
CLOSED: [2015-04-15 Wed 12:33]
http://serverfault.com/questions/377258/ec2-instance-store-cloning-or-to-ebs-via-gui-management-console

What AWS refers to as 'snapshots' can only be made from EBS volumes.
** DONE Why the old Reserved Instance model didn't work
CLOSED: [2015-04-29 Wed 22:52]
http://searchaws.techtarget.com/tip/Dissecting-AWS-EC2-Reserved-Instances-for-savings

The previous AWS RI pricing model left something to be desired. The massive upfront costs drove users away. Even if customers liked the RI framework, its CAPEX-heavy purchasing model outweighed why many turned to AWS in the first place -- OPEX optimization.

Previous AWS RI pricing levels were based on usage patterns: Light, Medium and Heavy Utilization. Light Utilization RIs were geared toward two- to five-month periods of AWS use; Medium RIs worked for five- to 10-month usage timeframes; Heavy RIs were created for continuous usage. This concept gave AWS customers the ability to save 20% to 60%, but came with a challenge: to see the cost savings, IT teams needed to properly match actual usage with buying behavior. And if they didn't, they paid more. For example, if a user purchased a Light Utilization RI and ran it for 11 months -- longer than its recommended usage -- it would become more expensive than a Medium or Heavy Utilization RI.
** DONE EC2 reboot: recommended that the user use Amazon EC2 to reboot the instance
CLOSED: [2015-05-02 Sat 16:25]

It is recommended that the user reboot the instance through Amazon EC2 (console, CLI, or API) instead of running the operating system reboot command from within the instance.

Rebooting an instance is equivalent to rebooting an operating system.
** DONE [#B] EC2 supports 2 types of block devices
CLOSED: [2015-05-03 Sun 07:58]

A block device is a storage device that moves data in sequences of bytes or bits (blocks). These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives. A block device can be physically attached to a computer or accessed remotely as if it were physically attached to the computer. How many types of block devices does Amazon EC2 support?

A. 8
B. 2
C. 16
D. 32

B

Amazon EC2 supports two types of block devices:
  • Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance)
  • Amazon EBS volumes (remote storage devices)
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
** When you use an AWS EC2 instance for less than an hour, you will be charged for a full hour
http://www.quora.com/When-you-use-an-AWS-EC2-instance-for-less-than-an-hour-are-you-charged-for-a-full-hour

#+BEGIN_EXAMPLE
In general, yes. Every time you start an instance you are immediately charged for one hour of running. Every time your instance goes over a 60 minute boundary (from when it started) you are also charged for one hour of running.

The hours are based on the time you started the instance, not time of day on a wall clock. An instance that runs from 2:45 to 3:15 is charged for a single hour of running.

If you stop an instance and then start the same instance again, you are charged for another hour upon starting it, even if you are still within the same 60 minute period as when it was stopped. Here is an article where I tested this behavior back in 2010: EBS Boot Instance Stop+Start Begins a New Hour of Charges on EC2
#+END_EXAMPLE
** DONE [#B] Feature: Detach volume from an Instance
CLOSED: [2015-05-04 Mon 14:51]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html

  • If the instance that the volume is attached to is running, you must unmount the volume (from the instance) before you detach it.

  • If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume.
*** Force Detach
Forcing the detachment can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance, or if you are detaching a volume with the intention of deleting it. The instance doesn't get an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures.
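A hedged CLI sketch (the volume ID is a placeholder):

#+BEGIN_EXAMPLE
# Normal detach, after unmounting the volume inside the guest
aws ec2 detach-volume --volume-id vol-xxxxxxxx

# Last resort: force the detach; run a file system check before trusting the volume again
aws ec2 detach-volume --volume-id vol-xxxxxxxx --force
#+END_EXAMPLE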

If you've tried to force the volume to detach multiple times over several minutes and it stays in the detaching state, you can post a request for help to the Amazon EC2 forum.
*** Question
An EBS volume was unable to detach from an instance. Thus, the user used the Force Detach option. Which of the below mentioned options can happen after the volume has been forcibly detached?

A. AWS deletes the volume automatically since it will be in a corrupted state
B. The instance may not be able to flush the file system and may result in a corrupted file system of the volume
C. The volume will be available but cannot be attached to any instance in the future
D. AWS terminates the instance automatically since the file system is corrupted

B

If the EBS volume stays in the detaching state, the user can force the detachment by clicking Force Detach. Forcing the detachment can lead to either data loss or a corrupted file system. The user should use this option only as a last resort to detach a volume from a failed instance, or if he is detaching a volume with the intention of deleting it. The instance does not get an opportunity to flush file system caches or file system metadata. If the user uses this option, he must perform file system check and repair procedures.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html
** DONE [#C] M1 and M3 instance difference: the same ratio of CPU and memory, but M3 is better in most cases
CLOSED: [2015-05-05 Tue 10:06]
https://aws.amazon.com/ec2/faqs/

Q: M1 and M3 Standard instances have the same ratio of CPU and memory. When should I use one instance over the other?

M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-based instance storage that delivers higher I/O performance. M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources. However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications.

--8<-------------------------- separator ------------------------>8--

You have been using T2 instances as your CPU requirements have not been that intensive. However, you now start to think about larger instance types and start looking at M1 and M3 instances. You are a little confused as to the differences between them, as they both seem to have the same ratio of CPU and memory. Which statement below is incorrect as to why you would use one over the other?

A. M3 instances are less expensive than M1 instances.
B. M3 instances provide better, more consistent performance than M1 instances for most use-cases.
C. M3 instances also offer SSD-based instance storage that delivers higher I/O performance.
D. M3 instances are configured with more swap memory than M1 instances.

D

Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. M1 and M3 Standard instances have the same ratio of CPU and memory; some reasons below as to why you would use one over the other. M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-based instance storage that delivers higher I/O performance. M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources. However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications. https://aws.amazon.com/ec2/faqs/
** DONE feature: EC2 metadata and userdata
CLOSED: [2015-05-06 Wed 09:56]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

curl http://169.254.169.254/latest/meta-data/
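A few commonly used paths, run from within the instance (these are standard metadata service endpoints):

#+BEGIN_EXAMPLE
# Individual metadata keys
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/local-ipv4

# The user data passed at launch time
curl http://169.254.169.254/latest/user-data
#+END_EXAMPLE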

User data is treated as opaque data. It's limited to 16 KB.
** DONE default security group behavior
CLOSED: [2015-05-06 Wed 10:01]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#default-security-group
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

  • Your changes are automatically applied to the instances associated with the security group after a short period.

Default Security Group
| Name     | Summary                                                                                      |
|----------+----------------------------------------------------------------------------------------------|
| Inbound  | Allow inbound traffic only from other instances associated with the default security group  |
| Outbound | Allow all outbound traffic from the instance                                                 |
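To inspect the current rules of the default group, one option (a sketch) is:

#+BEGIN_EXAMPLE
# Show the rules of the security group named "default"
aws ec2 describe-security-groups --filters Name=group-name,Values=default
#+END_EXAMPLE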

--8<-------------------------- separator ------------------------>8--

Select the correct set of options. These are the initial settings for the default security group:
A. Allow no inbound traffic, allow all outbound traffic, and allow instances associated with this security group to talk to each other
B. Allow all inbound traffic, allow no outbound traffic, and allow instances associated with this security group to talk to each other
C. Allow no inbound traffic, allow all outbound traffic, and do NOT allow instances associated with this security group to talk to each other
D. Allow all inbound traffic, allow all outbound traffic, and do NOT allow instances associated with this security group to talk to each other

A
** [#B] An EBS volume can be attached to only one instance at a time within the same Availability Zone.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

Can we attach an EBS volume to more than one EC2 instance at the same time?
A. No
B. Yes
C. Only EC2-optimized EBS volumes.
D. Only in read mode.

No
** DONE You can't change the outbound rules for EC2-Classic.
CLOSED: [2015-05-06 Wed 11:38]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

You can't change the outbound rules for EC2-Classic. Security group rules are always permissive; you can't create rules that deny access.
** DONE If I buy a 3-year RI term and Amazon later lowers the price, will my charge be matched to the new price? No
CLOSED: [2015-05-06 Wed 11:44]
** DONE If I restart a VM multiple times within one hour, will I be charged for more than one hour? Yes
CLOSED: [2015-05-06 Wed 11:44]
http://www.quora.com/When-you-use-an-AWS-EC2-instance-for-less-than-an-hour-are-you-charged-for-a-full-hour
** DONE feature: ec2 instance store: storage physically attached to the hosting computer
CLOSED: [2015-05-07 Thu 23:03]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html

Many instances can access storage from disks that are physically attached to the host computer.

This disk storage is referred to as instance store.
** DONE [#B] Feature: Amazon Machine Images use one of two types of virtualization: HVM and PV
CLOSED: [2015-05-08 Fri 14:27]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html

HVM is winning over PV now.
| Virtualization Type            | Summary |
|--------------------------------+---------|
| paravirtual (PV)               |         |
| hardware virtual machine (HVM) |         |

The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.
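To find HVM, EBS-backed AMIs from the CLI, a hedged sketch (the owner and filters can be adjusted to taste):

#+BEGIN_EXAMPLE
# Pick one Amazon-owned AMI that is both HVM and EBS-backed
aws ec2 describe-images --owners amazon --filters Name=virtualization-type,Values=hvm Name=root-device-type,Values=ebs --query 'Images[0].[ImageId,Name]'
#+END_EXAMPLE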

  • Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing.

  • Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast access to the underlying hardware on the host system.

  • All current generation instance types support HVM AMIs.

  • For the best performance, we recommend that you use current generation instance types and HVM AMIs when you launch new instances.

  • Historically, PV guests had better performance than HVM guests in many cases, but because of enhancements in HVM virtualization and the availability of PV drivers for HVM AMIs, this is no longer true.

  • Paravirtual guests traditionally performed better with storage and network operations than HVM guests because they could leverage special drivers for I/O that avoided the overhead of emulating network and disk hardware, whereas HVM guests had to translate these instructions to emulated hardware.
** DONE [#B] Feature: expand disk without losing data and with minimum downtime
CLOSED: [2015-05-08 Fri 15:34]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html
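A rough CLI sketch of the procedure described in the steps below (instance/volume/snapshot IDs, size, availability zone, and device name are all placeholders; you still need to grow the file system inside the guest afterwards):

#+BEGIN_EXAMPLE
# 1. Stop (do not terminate) the instance, then snapshot the volume to expand
aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 create-snapshot --volume-id vol-old11111 --description "pre-expand snapshot"

# 2. Create a bigger volume from that snapshot, in the same AZ as the instance
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --size 200 --availability-zone us-east-1a

# 3. Swap the volumes and start the instance again
aws ec2 detach-volume --volume-id vol-old11111
aws ec2 attach-volume --volume-id vol-new22222 --instance-id i-xxxxxxxx --device /dev/sda1
aws ec2 start-instances --instance-ids i-xxxxxxxx
#+END_EXAMPLE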

  • Shutdown without terminate

  • Create a snapshot of the volume to expand.

  • Create a new volume from the snapshot.

  • Detach the old volume.

  • Attach the newly expanded volume
** DONE Feature: Procedure to create AMI of an instance store-backed instance
CLOSED: [2015-05-09 Sat 09:59]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/ami_create_instance_store.png

| Steps                     | Summary                                               |
|---------------------------+--------------------------------------------------------|
| Bundle the volume         | image.manifest.xml, plus multiple image.part.xx files |
| Upload the bundled volume | Upload to an S3 bucket                                |
| Register a new AMI        |                                                       |

A user has launched an EC2 instance from an instance store backed AMI. The infrastructure team wants to create an AMI from the running instance. Which of the below mentioned steps will not be performed while creating the AMI?

A. Upload the bundled volume
B. Bundle the volume
C. Define the AMI launch permissions
D. Register the AMI

C

When the user has launched an EC2 instance from an instance store-backed AMI, creating an AMI from it requires certain steps, such as "Bundle the root volume", "Upload the bundled volume" and "Register the AMI". Once the AMI is created the user can set up the launch permissions; however, that is not required during creation. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html
*** TODO How "Bundle the volume" is done?
*** Converting your Instance Store-Backed AMI to an Amazon EBS-Backed AMI
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html

ec2-download-bundle -b my-s3-bucket/bundle_folder/bundle_name -m image.manifest.xml -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY --privatekey /path/to/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBEXAMPLE.pem -d /tmp/bundle

ec2-unbundle -m image.manifest.xml --privatekey /path/to/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBEXAMPLE.pem

sudo dd if=/tmp/bundle/image of=/dev/sdb bs=1M
** DONE Create instance store-backed AMI question
CLOSED: [2015-05-09 Sat 11:23]

A user has launched an EC2 instance from an instance store backed AMI. The infrastructure team wants to create an AMI from the running instance. Which of the below mentioned credentials is not required while creating the AMI?

A. X.509 certificate and private key
B. Access key and secret access key
C. AWS login ID to login to the console
D. AWS account ID

C (the console login is not required)

When the user has launched an EC2 instance from an instance store backed AMI and the admin team wants to create an AMI from it, the user needs to set up the AWS AMI tools or the API tools first. Once the tools are set up, the user will need the following credentials: AWS account ID; AWS access and secret access key; X.509 certificate with private key. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html
** DONE security group change: applies to all instances immediately or after several minutes
CLOSED: [2015-05-09 Sat 12:01]
http://serverfault.com/questions/205094/are-ec2-security-group-changes-effective-immediately-for-running-instances

You can modify rules for a group at any time. The new rules are automatically enforced for all running instances and instances launched in the future.
** DONE HTTP ping speed test for different AWS region: http://cloudping.info/
CLOSED: [2015-03-13 Fri 21:35]
** DONE [#A] When I click: configure system, jenkins is not responding: out of memory for the container
CLOSED: [2017-11-05 Sun 14:43]

#+BEGIN_EXAMPLE
2017-11-10T21:49:01Z [INFO] Redundant container state change for task task-denny-proxy:8 arn:aws:ecs:us-east-1:938874974988:task/ed3f3ee7-2d21-4b46-94c8-ba2dd433d42e, TaskStatus: (RUNNING->RUNNING) Containers: [nginx-proxy (RUNNING->RUNNING),]: nginx-proxy(denny/devops-blog:nginx-proxy) (RUNNING->RUNNING) to RUNNING, but already RUNNING
2017-11-10T21:49:11Z [INFO] ACS Websocket connection closed for a valid reason
2017-11-10T21:49:12Z [INFO] Connected to ACS endpoint
2017-11-10T21:49:23Z [INFO] TaskHandler: Sending task change: TaskChange: [arn:aws:ecs:us-east-1:938874974988:task/13eb60b6-3734-4b34-b80d-f56dffb08334 -> STOPPED, Known Sent: STOPPED] sent: false
2017-11-10T21:49:23Z [WARN] Could not submit task state change: [arn:aws:ecs:us-east-1:938874974988:task/13eb60b6-3734-4b34-b80d-f56dffb08334 -> STOPPED, Known Sent: STOPPED]: ClientException: The referenced task was not found. status code: 400, request id: 03de58fc-c661-11e7-b081-5f4db4726fde
2017-11-10T21:49:23Z [ERROR] TaskHandler: Unretriable error submitting task state change [TaskChange: [arn:aws:ecs:us-east-1:938874974988:task/13eb60b6-3734-4b34-b80d-f56dffb08334 -> STOPPED, Known Sent: STOPPED] sent: false]: ClientException: The referenced task was not found. status code: 400, request id: 03de58fc-c661-11e7-b081-5f4db4726fde
#+END_EXAMPLE
** TODO [#A] ECS: jenkins container Status reason CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-task-denny-jenkins-11-jenkins-aio-b8f4c8aa979fd388b401 (98bd4e872e37ebd5e68b3744eebb08457bfc69b98e7253d5f394cf55afa2f710): Bind for 0.0.0.0:18080 f
Host name jenkins
** TODO Use ECS to start a jenkins container
bash /usr/local/bin/jenkins.sh

http://localhost:18000
*** DONE build jenkins docker image has failed
CLOSED: [2017-11-04 Sat 16:50]
*** DONE auto initialize jenkins service
CLOSED: [2017-11-04 Sat 16:50]
-Djenkins.install.runSetupWizard=false
*** TODO jenkins volume for jenkins home directory
*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** TODO enable jenkins security model
*** TODO add sample jenkins users by groovy script
*** DONE jenkins healthcheck doesn't work
CLOSED: [2017-11-04 Sat 19:30]
*** DONE jenkins run groovy script in command line
CLOSED: [2017-11-04 Sat 19:30]
https://stackoverflow.com/questions/35778524/parsing-the-command-line-in-jenkins-cli-groovy-scripts

#+BEGIN_EXAMPLE
import jenkins.model.*
import groovy.util.CliBuilder

println("CliBuilder imported; calling constructor...")
def cli = new CliBuilder(usage: 'myscript.groovy [-halr] [name]')

results in

$ java -jar "jenkins-cli.jar" -s https://myjenkins1/ groovy myscript.groovy
CliBuilder imported; calling constructor...
#+END_EXAMPLE
** TODO [#A] ECS concept: task, cluster, service, container, EC2
** TODO [#A] How does ECS work with multiple instances
** TODO [#A] ECS add disk volume
** TODO [#A] ECS: Bring my own image: ECS-optimized AMI
sudo yum install -y tmux
** TODO ECS network mode: bridge, host
** TODO AWS ECS create 2 containers for blue/green deployment
** TODO [#A] ECS use s3 bitbucket: Use git repo to host the images/css/js, instead of wordpress git repo

  • [#A] Amazon CloudFormation: Templated AWS Resource Creation :noexport:

| Name                      | Summary |
|---------------------------+---------|
| helper: cfn-init          |         |
| helper: cfn-signal        |         |
| helper: cfn-get-metadata  |         |
| helper: cfn-hup           |         |
|---------------------------+---------|
| /var/log/cfn-init.log     |         |
| /var/log/cfn-init-cmd.log |         |
  • AWSTemplateFormatVersion: Specifies the AWS CloudFormation template version.

  • Description: A text string that describes the template.

  • Mappings: A mapping of keys and associated values that you can use to specify conditional parameter values. This is CloudFormation's version of a "case" statement.

  • Outputs: Describes the values that are returned whenever you view your stack's properties. This gets displayed in the AWS CloudFormation Console.

  • Parameters: Specifies values that you can pass in to your template at runtime.

  • Resources: Specifies the stack resources and their properties, like our EC2 instance. This is the only required top-level section.

  • A stack is a collection of AWS resources that you can manage as a single unit. If you delete the stack, all of its related resources are deleted.

  • You are charged for the stack resources for the time they were operating (even if you deleted the stack right away).

  • There is no additional charge for AWS CloudFormation itself.
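A minimal sketch tying these sections together (the AMI ID is a placeholder; only Resources is strictly required):

#+BEGIN_EXAMPLE
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal template: a single EC2 instance",
  "Parameters" : {
    "KeyName" : { "Type" : "AWS::EC2::KeyPair::KeyName" }
  },
  "Resources" : {
    "MyInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-c3b8d6aa",
        "InstanceType" : "t2.micro",
        "KeyName" : { "Ref" : "KeyName" }
      }
    }
  },
  "Outputs" : {
    "InstanceId" : { "Value" : { "Ref" : "MyInstance" } }
  }
}
#+END_EXAMPLE

You could launch it with something like: aws cloudformation create-stack --stack-name my-stack --template-body file://template.json --parameters ParameterKey=KeyName,ParameterValue=mykey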

GitHub Example: https://github.com/awslabs/startup-kit-templates
** basic use
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html

You can use AWS CloudFormation to automatically install, configure, and start applications on Amazon EC2 instances. Doing so enables you to easily duplicate deployments and update existing installations without connecting directly to the instance, which can save you a lot of time and effort.

AWS CloudFormation includes a set of helper scripts (cfn-init, cfn-signal, cfn-get-metadata, and cfn-hup) that are based on cloud-init. You call these helper scripts from your AWS CloudFormation templates to install, configure, and update applications on Amazon EC2 instances that are in the same template.
** DONE [#A] Use CloudFormation to start an EC2 VM
CLOSED: [2017-11-13 Mon 10:04]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html

https://medium.com/boltops/a-simple-introduction-to-aws-cloudformation-part-1-1694a41ae59d
*** Concept: Parameters
*** Concept: Resources
Metadata: AWS::CloudFormation::Init
*** single-node wordpress
{ "AWSTemplateFormatVersion" : "2010-09-09",

"Description" : "AWS CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs WordPress with a local MySQL database for storage. It demonstrates using the AWS CloudFormation bootstrap scripts to deploy WordPress. WARNING This template creates an Amazon EC2 instance. You will be billed for the AWS resources used if you create a stack from this template.",

"Parameters" : {

"KeyName": {
  "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
  "Type": "AWS::EC2::KeyPair::KeyName",
  "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
},

"InstanceType" : {
  "Description" : "WebServer EC2 instance type",
  "Type" : "String",
  "Default" : "t2.small",
  "AllowedValues" : [ "t1.micro", "t2.nano", "t2.micro", "t2.small", "t2.medium", "t2.large", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "g2.8xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"]

, "ConstraintDescription" : "must be a valid EC2 instance type." },

"SSHLocation": {
  "Description": "The IP address range that can be used to SSH to the EC2 instances",
  "Type": "String",
  "MinLength": "9",
  "MaxLength": "18",
  "Default": "0.0.0.0/0",
  "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
  "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
},

"DBName" : {
  "Default": "wordpressdb",
  "Description" : "The WordPress database name",
  "Type": "String",
  "MinLength": "1",
  "MaxLength": "64",
  "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
  "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},

"DBUser" : {
  "NoEcho": "true",
  "Description" : "The WordPress database admin account username",
  "Type": "String",
  "MinLength": "1",
  "MaxLength": "16",
  "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
  "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},

"DBPassword" : {
  "NoEcho": "true",
  "Description" : "The WordPress database admin account password",
  "Type": "String",
  "MinLength": "8",
  "MaxLength": "41",
  "AllowedPattern" : "[a-zA-Z0-9]*",
  "ConstraintDescription" : "must contain only alphanumeric characters."
},

"DBRootPassword" : {
  "NoEcho": "true",
  "Description" : "MySQL root password",
  "Type": "String",
  "MinLength": "8",
  "MaxLength": "41",
  "AllowedPattern" : "[a-zA-Z0-9]*",
  "ConstraintDescription" : "must contain only alphanumeric characters."
}

},

"Mappings" : { "AWSInstanceType2Arch" : { "t1.micro" : { "Arch" : "PV64" }, "t2.nano" : { "Arch" : "HVM64" }, "t2.micro" : { "Arch" : "HVM64" }, "t2.small" : { "Arch" : "HVM64" }, "t2.medium" : { "Arch" : "HVM64" }, "t2.large" : { "Arch" : "HVM64" }, "m1.small" : { "Arch" : "PV64" }, "m1.medium" : { "Arch" : "PV64" }, "m1.large" : { "Arch" : "PV64" }, "m1.xlarge" : { "Arch" : "PV64" }, "m2.xlarge" : { "Arch" : "PV64" }, "m2.2xlarge" : { "Arch" : "PV64" }, "m2.4xlarge" : { "Arch" : "PV64" }, "m3.medium" : { "Arch" : "HVM64" }, "m3.large" : { "Arch" : "HVM64" }, "m3.xlarge" : { "Arch" : "HVM64" }, "m3.2xlarge" : { "Arch" : "HVM64" }, "m4.large" : { "Arch" : "HVM64" }, "m4.xlarge" : { "Arch" : "HVM64" }, "m4.2xlarge" : { "Arch" : "HVM64" }, "m4.4xlarge" : { "Arch" : "HVM64" }, "m4.10xlarge" : { "Arch" : "HVM64" }, "c1.medium" : { "Arch" : "PV64" }, "c1.xlarge" : { "Arch" : "PV64" }, "c3.large" : { "Arch" : "HVM64" }, "c3.xlarge" : { "Arch" : "HVM64" }, "c3.2xlarge" : { "Arch" : "HVM64" }, "c3.4xlarge" : { "Arch" : "HVM64" }, "c3.8xlarge" : { "Arch" : "HVM64" }, "c4.large" : { "Arch" : "HVM64" }, "c4.xlarge" : { "Arch" : "HVM64" }, "c4.2xlarge" : { "Arch" : "HVM64" }, "c4.4xlarge" : { "Arch" : "HVM64" }, "c4.8xlarge" : { "Arch" : "HVM64" }, "g2.2xlarge" : { "Arch" : "HVMG2" }, "g2.8xlarge" : { "Arch" : "HVMG2" }, "r3.large" : { "Arch" : "HVM64" }, "r3.xlarge" : { "Arch" : "HVM64" }, "r3.2xlarge" : { "Arch" : "HVM64" }, "r3.4xlarge" : { "Arch" : "HVM64" }, "r3.8xlarge" : { "Arch" : "HVM64" }, "i2.xlarge" : { "Arch" : "HVM64" }, "i2.2xlarge" : { "Arch" : "HVM64" }, "i2.4xlarge" : { "Arch" : "HVM64" }, "i2.8xlarge" : { "Arch" : "HVM64" }, "d2.xlarge" : { "Arch" : "HVM64" }, "d2.2xlarge" : { "Arch" : "HVM64" }, "d2.4xlarge" : { "Arch" : "HVM64" }, "d2.8xlarge" : { "Arch" : "HVM64" }, "hi1.4xlarge" : { "Arch" : "HVM64" }, "hs1.8xlarge" : { "Arch" : "HVM64" }, "cr1.8xlarge" : { "Arch" : "HVM64" }, "cc2.8xlarge" : { "Arch" : "HVM64" } },

"AWSInstanceType2NATArch" : {
  "t1.micro"    : { "Arch" : "NATPV64"   },
  "t2.nano"     : { "Arch" : "NATHVM64"  },
  "t2.micro"    : { "Arch" : "NATHVM64"  },
  "t2.small"    : { "Arch" : "NATHVM64"  },
  "t2.medium"   : { "Arch" : "NATHVM64"  },
  "t2.large"    : { "Arch" : "NATHVM64"  },
  "m1.small"    : { "Arch" : "NATPV64"   },
  "m1.medium"   : { "Arch" : "NATPV64"   },
  "m1.large"    : { "Arch" : "NATPV64"   },
  "m1.xlarge"   : { "Arch" : "NATPV64"   },
  "m2.xlarge"   : { "Arch" : "NATPV64"   },
  "m2.2xlarge"  : { "Arch" : "NATPV64"   },
  "m2.4xlarge"  : { "Arch" : "NATPV64"   },
  "m3.medium"   : { "Arch" : "NATHVM64"  },
  "m3.large"    : { "Arch" : "NATHVM64"  },
  "m3.xlarge"   : { "Arch" : "NATHVM64"  },
  "m3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.large"    : { "Arch" : "NATHVM64"  },
  "m4.xlarge"   : { "Arch" : "NATHVM64"  },
  "m4.2xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.4xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.10xlarge" : { "Arch" : "NATHVM64"  },
  "c1.medium"   : { "Arch" : "NATPV64"   },
  "c1.xlarge"   : { "Arch" : "NATPV64"   },
  "c3.large"    : { "Arch" : "NATHVM64"  },
  "c3.xlarge"   : { "Arch" : "NATHVM64"  },
  "c3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "c3.4xlarge"  : { "Arch" : "NATHVM64"  },
  "c3.8xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.large"    : { "Arch" : "NATHVM64"  },
  "c4.xlarge"   : { "Arch" : "NATHVM64"  },
  "c4.2xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.4xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.8xlarge"  : { "Arch" : "NATHVM64"  },
  "g2.2xlarge"  : { "Arch" : "NATHVMG2"  },
  "g2.8xlarge"  : { "Arch" : "NATHVMG2"  },
  "r3.large"    : { "Arch" : "NATHVM64"  },
  "r3.xlarge"   : { "Arch" : "NATHVM64"  },
  "r3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "r3.4xlarge"  : { "Arch" : "NATHVM64"  },
  "r3.8xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.xlarge"   : { "Arch" : "NATHVM64"  },
  "i2.2xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.4xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.8xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.xlarge"   : { "Arch" : "NATHVM64"  },
  "d2.2xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.4xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.8xlarge"  : { "Arch" : "NATHVM64"  },
  "hi1.4xlarge" : { "Arch" : "NATHVM64"  },
  "hs1.8xlarge" : { "Arch" : "NATHVM64"  },
  "cr1.8xlarge" : { "Arch" : "NATHVM64"  },
  "cc2.8xlarge" : { "Arch" : "NATHVM64"  }
}

, "AWSRegionArch2AMI" : { "us-east-1" : {"PV64" : "ami-2a69aa47", "HVM64" : "ami-6869aa05", "HVMG2" : "ami-1f12e965"}, "us-west-2" : {"PV64" : "ami-7f77b31f", "HVM64" : "ami-7172b611", "HVMG2" : "ami-5c9b6124"}, "us-west-1" : {"PV64" : "ami-a2490dc2", "HVM64" : "ami-31490d51", "HVMG2" : "ami-7291a112"}, "eu-west-1" : {"PV64" : "ami-4cdd453f", "HVM64" : "ami-f9dd458a", "HVMG2" : "ami-b411c5cd"}, "eu-west-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-886369ec", "HVMG2" : "NOT_SUPPORTED"}, "eu-central-1" : {"PV64" : "ami-6527cf0a", "HVM64" : "ami-ea26ce85", "HVMG2" : "ami-be40f2d1"}, "ap-northeast-1" : {"PV64" : "ami-3e42b65f", "HVM64" : "ami-374db956", "HVMG2" : "ami-3efd2c58"}, "ap-northeast-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-2b408b45", "HVMG2" : "NOT_SUPPORTED"}, "ap-southeast-1" : {"PV64" : "ami-df9e4cbc", "HVM64" : "ami-a59b49c6", "HVMG2" : "ami-3e91ed5d"}, "ap-southeast-2" : {"PV64" : "ami-63351d00", "HVM64" : "ami-dc361ebf", "HVMG2" : "ami-84a142e6"}, "ap-south-1" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-ffbdd790", "HVMG2" : "ami-25ffbe4a"}, "us-east-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-f6035893", "HVMG2" : "NOT_SUPPORTED"}, "ca-central-1" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-730ebd17", "HVMG2" : "NOT_SUPPORTED"}, "sa-east-1" : {"PV64" : "ami-1ad34676", "HVM64" : "ami-6dd04501", "HVMG2" : "NOT_SUPPORTED"}, "cn-north-1" : {"PV64" : "ami-77559f1a", "HVM64" : "ami-8e6aa0e3", "HVMG2" : "NOT_SUPPORTED"}, "cn-northwest-1" : {"PV64" : "ami-80707be2", "HVM64" : "ami-cb858fa9", "HVMG2" : "NOT_SUPPORTED"} }

},

"Resources" : { "WebServerSecurityGroup" : { "Type" : "AWS::EC2::SecurityGroup", "Properties" : { "GroupDescription" : "Enable HTTP access via port 80 locked down to the load balancer + SSH access", "SecurityGroupIngress" : [ {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"}, {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}} ] } },

"WebServer": {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "configSets" : {
        "wordpress_install" : ["install_cfn", "install_wordpress", "configure_wordpress" ]
      },
      "install_cfn" : {
        "files": {
          "/etc/cfn/cfn-hup.conf": {
            "content": { "Fn::Join": [ "", [
              "[main]\n",
              "stack=", { "Ref": "AWS::StackId" }, "\n",
              "region=", { "Ref": "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },
          "/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
            "content": { "Fn::Join": [ "", [
              "[cfn-auto-reloader-hook]\n",
              "triggers=post.update\n",
              "path=Resources.WebServer.Metadata.AWS::CloudFormation::Init\n",
              "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ", { "Ref" : "AWS::StackName" },
                      "         --resource WebServer ",
                      "         --configsets wordpress_install ",
                      "         --region ", { "Ref" : "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                          "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"] }
          }
        }
      },

      "install_wordpress" : {
        "packages" : {
          "yum" : {
            "php"          : [],
            "php-mysql"    : [],
            "mysql"        : [],
            "mysql-server" : [],
            "mysql-devel"  : [],
            "mysql-libs"   : [],
            "httpd"        : []
          }
        },
        "sources" : {
          "/var/www/html" : "http://wordpress.org/latest.tar.gz"
        },
        "files" : {
          "/tmp/setup.mysql" : {
            "content" : { "Fn::Join" : ["", [
              "CREATE DATABASE ", { "Ref" : "DBName" }, ";\n",
              "CREATE USER '", { "Ref" : "DBUser" }, "'@'localhost' IDENTIFIED BY '", { "Ref" : "DBPassword" }, "';\n",
              "GRANT ALL ON ", { "Ref" : "DBName" }, ".* TO '", { "Ref" : "DBUser" }, "'@'localhost';\n",
              "FLUSH PRIVILEGES;\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },

          "/tmp/create-wp-config" : {
            "content" : { "Fn::Join" : [ "", [
              "#!/bin/bash -xe\n",
              "cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php\n",
              "sed -i \"s/'database_name_here'/'",{ "Ref" : "DBName" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'username_here'/'",{ "Ref" : "DBUser" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'password_here'/'",{ "Ref" : "DBPassword" }, "'/g\" wp-config.php\n"
            ]]},
            "mode" : "000500",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd"  : { "enabled" : "true", "ensureRunning" : "true" },
            "mysqld" : { "enabled" : "true", "ensureRunning" : "true" }
          }
        }
      },

      "configure_wordpress" : {
        "commands" : {
          "01_set_mysql_root_password" : {
            "command" : { "Fn::Join" : ["", ["mysqladmin -u root password '", { "Ref" : "DBRootPassword" }, "'"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "02_create_database" : {
            "command" : { "Fn::Join" : ["", ["mysql -u root --password='", { "Ref" : "DBRootPassword" }, "' < /tmp/setup.mysql"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "03_configure_wordpress" : {
            "command" : "/tmp/create-wp-config",
            "cwd" : "/var/www/html/wordpress"
          }
        }
      }
    }
  },
  "Properties": {
    "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                      { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
    "InstanceType"   : { "Ref" : "InstanceType" },
    "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
    "KeyName"        : { "Ref" : "KeyName" },
    "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                   "#!/bin/bash -xe\n",
                   "yum update -y aws-cfn-bootstrap\n",

                   "/opt/aws/bin/cfn-init -v ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --configsets wordpress_install ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n",

                   "/opt/aws/bin/cfn-signal -e $? ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  },
  "CreationPolicy" : {
    "ResourceSignal" : {
      "Timeout" : "PT15M"
    }
  }
}

},

"Outputs" : { "WebsiteURL" : { "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WebServer", "PublicDnsName" ]}, "/wordpress" ]]}, "Description" : "WordPress Website" } } } ** DONE cloudformation change json to yml: aws-cfn-template-flip CLOSED: [2017-11-13 Mon 11:27] https://github.com/awslabs/aws-cfn-template-flip

cfn-flip examples/test.json output.yaml
** DONE CF example: aio wordpress
CLOSED: [2017-11-13 Mon 09:52]
#+BEGIN_EXAMPLE
{ "AWSTemplateFormatVersion" : "2010-09-09",

"Description" : "AWS CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs WordPress with a local MySQL database for storage. It demonstrates using the AWS CloudFormation bootstrap scripts to deploy WordPress. WARNING This template creates an Amazon EC2 instance. You will be billed for the AWS resources used if you create a stack from this template.",

"Parameters" : {

"KeyName": {
  "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
  "Type": "AWS::EC2::KeyPair::KeyName",
  "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
},

"InstanceType" : {
  "Description" : "WebServer EC2 instance type",
  "Type" : "String",
  "Default" : "t2.small",
  "AllowedValues" : [ "t1.micro", "t2.nano", "t2.micro", "t2.small", "t2.medium", "t2.large", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "g2.8xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"]

, "ConstraintDescription" : "must be a valid EC2 instance type." },

"SSHLocation": {
  "Description": "The IP address range that can be used to SSH to the EC2 instances",
  "Type": "String",
  "MinLength": "9",
  "MaxLength": "18",
  "Default": "0.0.0.0/0",
  "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
  "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
},

"DBName" : {
  "Default": "wordpressdb",
  "Description" : "The WordPress database name",
  "Type": "String",
  "MinLength": "1",
  "MaxLength": "64",
  "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
  "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},

"DBUser" : {
  "NoEcho": "true",
  "Description" : "The WordPress database admin account username",
  "Type": "String",
  "MinLength": "1",
  "MaxLength": "16",
  "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
  "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},

"DBPassword" : {
  "NoEcho": "true",
  "Description" : "The WordPress database admin account password",
  "Type": "String",
  "MinLength": "8",
  "MaxLength": "41",
  "AllowedPattern" : "[a-zA-Z0-9]*",
  "ConstraintDescription" : "must contain only alphanumeric characters."
},

"DBRootPassword" : {
  "NoEcho": "true",
  "Description" : "MySQL root password",
  "Type": "String",
  "MinLength": "8",
  "MaxLength": "41",
  "AllowedPattern" : "[a-zA-Z0-9]*",
  "ConstraintDescription" : "must contain only alphanumeric characters."
}

},

"Mappings" : { "AWSInstanceType2Arch" : { "t1.micro" : { "Arch" : "PV64" }, "t2.nano" : { "Arch" : "HVM64" }, "t2.micro" : { "Arch" : "HVM64" }, "t2.small" : { "Arch" : "HVM64" }, "t2.medium" : { "Arch" : "HVM64" }, "t2.large" : { "Arch" : "HVM64" }, "m1.small" : { "Arch" : "PV64" }, "m1.medium" : { "Arch" : "PV64" }, "m1.large" : { "Arch" : "PV64" }, "m1.xlarge" : { "Arch" : "PV64" }, "m2.xlarge" : { "Arch" : "PV64" }, "m2.2xlarge" : { "Arch" : "PV64" }, "m2.4xlarge" : { "Arch" : "PV64" }, "m3.medium" : { "Arch" : "HVM64" }, "m3.large" : { "Arch" : "HVM64" }, "m3.xlarge" : { "Arch" : "HVM64" }, "m3.2xlarge" : { "Arch" : "HVM64" }, "m4.large" : { "Arch" : "HVM64" }, "m4.xlarge" : { "Arch" : "HVM64" }, "m4.2xlarge" : { "Arch" : "HVM64" }, "m4.4xlarge" : { "Arch" : "HVM64" }, "m4.10xlarge" : { "Arch" : "HVM64" }, "c1.medium" : { "Arch" : "PV64" }, "c1.xlarge" : { "Arch" : "PV64" }, "c3.large" : { "Arch" : "HVM64" }, "c3.xlarge" : { "Arch" : "HVM64" }, "c3.2xlarge" : { "Arch" : "HVM64" }, "c3.4xlarge" : { "Arch" : "HVM64" }, "c3.8xlarge" : { "Arch" : "HVM64" }, "c4.large" : { "Arch" : "HVM64" }, "c4.xlarge" : { "Arch" : "HVM64" }, "c4.2xlarge" : { "Arch" : "HVM64" }, "c4.4xlarge" : { "Arch" : "HVM64" }, "c4.8xlarge" : { "Arch" : "HVM64" }, "g2.2xlarge" : { "Arch" : "HVMG2" }, "g2.8xlarge" : { "Arch" : "HVMG2" }, "r3.large" : { "Arch" : "HVM64" }, "r3.xlarge" : { "Arch" : "HVM64" }, "r3.2xlarge" : { "Arch" : "HVM64" }, "r3.4xlarge" : { "Arch" : "HVM64" }, "r3.8xlarge" : { "Arch" : "HVM64" }, "i2.xlarge" : { "Arch" : "HVM64" }, "i2.2xlarge" : { "Arch" : "HVM64" }, "i2.4xlarge" : { "Arch" : "HVM64" }, "i2.8xlarge" : { "Arch" : "HVM64" }, "d2.xlarge" : { "Arch" : "HVM64" }, "d2.2xlarge" : { "Arch" : "HVM64" }, "d2.4xlarge" : { "Arch" : "HVM64" }, "d2.8xlarge" : { "Arch" : "HVM64" }, "hi1.4xlarge" : { "Arch" : "HVM64" }, "hs1.8xlarge" : { "Arch" : "HVM64" }, "cr1.8xlarge" : { "Arch" : "HVM64" }, "cc2.8xlarge" : { "Arch" : "HVM64" } },

"AWSInstanceType2NATArch" : {
  "t1.micro"    : { "Arch" : "NATPV64"   },
  "t2.nano"     : { "Arch" : "NATHVM64"  },
  "t2.micro"    : { "Arch" : "NATHVM64"  },
  "t2.small"    : { "Arch" : "NATHVM64"  },
  "t2.medium"   : { "Arch" : "NATHVM64"  },
  "t2.large"    : { "Arch" : "NATHVM64"  },
  "m1.small"    : { "Arch" : "NATPV64"   },
  "m1.medium"   : { "Arch" : "NATPV64"   },
  "m1.large"    : { "Arch" : "NATPV64"   },
  "m1.xlarge"   : { "Arch" : "NATPV64"   },
  "m2.xlarge"   : { "Arch" : "NATPV64"   },
  "m2.2xlarge"  : { "Arch" : "NATPV64"   },
  "m2.4xlarge"  : { "Arch" : "NATPV64"   },
  "m3.medium"   : { "Arch" : "NATHVM64"  },
  "m3.large"    : { "Arch" : "NATHVM64"  },
  "m3.xlarge"   : { "Arch" : "NATHVM64"  },
  "m3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.large"    : { "Arch" : "NATHVM64"  },
  "m4.xlarge"   : { "Arch" : "NATHVM64"  },
  "m4.2xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.4xlarge"  : { "Arch" : "NATHVM64"  },
  "m4.10xlarge" : { "Arch" : "NATHVM64"  },
  "c1.medium"   : { "Arch" : "NATPV64"   },
  "c1.xlarge"   : { "Arch" : "NATPV64"   },
  "c3.large"    : { "Arch" : "NATHVM64"  },
  "c3.xlarge"   : { "Arch" : "NATHVM64"  },
  "c3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "c3.4xlarge"  : { "Arch" : "NATHVM64"  },
  "c3.8xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.large"    : { "Arch" : "NATHVM64"  },
  "c4.xlarge"   : { "Arch" : "NATHVM64"  },
  "c4.2xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.4xlarge"  : { "Arch" : "NATHVM64"  },
  "c4.8xlarge"  : { "Arch" : "NATHVM64"  },
  "g2.2xlarge"  : { "Arch" : "NATHVMG2"  },
  "g2.8xlarge"  : { "Arch" : "NATHVMG2"  },
  "r3.large"    : { "Arch" : "NATHVM64"  },
  "r3.xlarge"   : { "Arch" : "NATHVM64"  },
  "r3.2xlarge"  : { "Arch" : "NATHVM64"  },
  "r3.4xlarge"  : { "Arch" : "NATHVM64"  },
  "r3.8xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.xlarge"   : { "Arch" : "NATHVM64"  },
  "i2.2xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.4xlarge"  : { "Arch" : "NATHVM64"  },
  "i2.8xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.xlarge"   : { "Arch" : "NATHVM64"  },
  "d2.2xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.4xlarge"  : { "Arch" : "NATHVM64"  },
  "d2.8xlarge"  : { "Arch" : "NATHVM64"  },
  "hi1.4xlarge" : { "Arch" : "NATHVM64"  },
  "hs1.8xlarge" : { "Arch" : "NATHVM64"  },
  "cr1.8xlarge" : { "Arch" : "NATHVM64"  },
  "cc2.8xlarge" : { "Arch" : "NATHVM64"  }
}

, "AWSRegionArch2AMI" : { "us-east-1" : {"PV64" : "ami-2a69aa47", "HVM64" : "ami-6869aa05", "HVMG2" : "ami-1f12e965"}, "us-west-2" : {"PV64" : "ami-7f77b31f", "HVM64" : "ami-7172b611", "HVMG2" : "ami-5c9b6124"}, "us-west-1" : {"PV64" : "ami-a2490dc2", "HVM64" : "ami-31490d51", "HVMG2" : "ami-7291a112"}, "eu-west-1" : {"PV64" : "ami-4cdd453f", "HVM64" : "ami-f9dd458a", "HVMG2" : "ami-b411c5cd"}, "eu-west-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-886369ec", "HVMG2" : "NOT_SUPPORTED"}, "eu-central-1" : {"PV64" : "ami-6527cf0a", "HVM64" : "ami-ea26ce85", "HVMG2" : "ami-be40f2d1"}, "ap-northeast-1" : {"PV64" : "ami-3e42b65f", "HVM64" : "ami-374db956", "HVMG2" : "ami-3efd2c58"}, "ap-northeast-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-2b408b45", "HVMG2" : "NOT_SUPPORTED"}, "ap-southeast-1" : {"PV64" : "ami-df9e4cbc", "HVM64" : "ami-a59b49c6", "HVMG2" : "ami-3e91ed5d"}, "ap-southeast-2" : {"PV64" : "ami-63351d00", "HVM64" : "ami-dc361ebf", "HVMG2" : "ami-84a142e6"}, "ap-south-1" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-ffbdd790", "HVMG2" : "ami-25ffbe4a"}, "us-east-2" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-f6035893", "HVMG2" : "NOT_SUPPORTED"}, "ca-central-1" : {"PV64" : "NOT_SUPPORTED", "HVM64" : "ami-730ebd17", "HVMG2" : "NOT_SUPPORTED"}, "sa-east-1" : {"PV64" : "ami-1ad34676", "HVM64" : "ami-6dd04501", "HVMG2" : "NOT_SUPPORTED"}, "cn-north-1" : {"PV64" : "ami-77559f1a", "HVM64" : "ami-8e6aa0e3", "HVMG2" : "NOT_SUPPORTED"}, "cn-northwest-1" : {"PV64" : "ami-80707be2", "HVM64" : "ami-cb858fa9", "HVMG2" : "NOT_SUPPORTED"} }

},

"Resources" : { "WebServerSecurityGroup" : { "Type" : "AWS::EC2::SecurityGroup", "Properties" : { "GroupDescription" : "Enable HTTP access via port 80 locked down to the load balancer + SSH access", "SecurityGroupIngress" : [ {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"}, {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}} ] } },

"WebServer": {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "configSets" : {
        "wordpress_install" : ["install_cfn", "install_wordpress", "configure_wordpress" ]
      },
      "install_cfn" : {
        "files": {
          "/etc/cfn/cfn-hup.conf": {
            "content": { "Fn::Join": [ "", [
              "[main]\n",
              "stack=", { "Ref": "AWS::StackId" }, "\n",
              "region=", { "Ref": "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },
          "/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
            "content": { "Fn::Join": [ "", [
              "[cfn-auto-reloader-hook]\n",
              "triggers=post.update\n",
              "path=Resources.WebServer.Metadata.AWS::CloudFormation::Init\n",
              "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ", { "Ref" : "AWS::StackName" },
                      "         --resource WebServer ",
                      "         --configsets wordpress_install ",
                      "         --region ", { "Ref" : "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                          "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"] }
          }
        }
      },

      "install_wordpress" : {
        "packages" : {
          "yum" : {
            "php"          : [],
            "php-mysql"    : [],
            "mysql"        : [],
            "mysql-server" : [],
            "mysql-devel"  : [],
            "mysql-libs"   : [],
            "httpd"        : []
          }
        },
        "sources" : {
          "/var/www/html" : "http://wordpress.org/latest.tar.gz"
        },
        "files" : {
          "/tmp/setup.mysql" : {
            "content" : { "Fn::Join" : ["", [
              "CREATE DATABASE ", { "Ref" : "DBName" }, ";\n",
              "CREATE USER '", { "Ref" : "DBUser" }, "'@'localhost' IDENTIFIED BY '", { "Ref" : "DBPassword" }, "';\n",
              "GRANT ALL ON ", { "Ref" : "DBName" }, ".* TO '", { "Ref" : "DBUser" }, "'@'localhost';\n",
              "FLUSH PRIVILEGES;\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },

          "/tmp/create-wp-config" : {
            "content" : { "Fn::Join" : [ "", [
              "#!/bin/bash -xe\n",
              "cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php\n",
              "sed -i \"s/'database_name_here'/'",{ "Ref" : "DBName" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'username_here'/'",{ "Ref" : "DBUser" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'password_here'/'",{ "Ref" : "DBPassword" }, "'/g\" wp-config.php\n"
            ]]},
            "mode" : "000500",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd"  : { "enabled" : "true", "ensureRunning" : "true" },
            "mysqld" : { "enabled" : "true", "ensureRunning" : "true" }
          }
        }
      },

      "configure_wordpress" : {
        "commands" : {
          "01_set_mysql_root_password" : {
            "command" : { "Fn::Join" : ["", ["mysqladmin -u root password '", { "Ref" : "DBRootPassword" }, "'"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "02_create_database" : {
            "command" : { "Fn::Join" : ["", ["mysql -u root --password='", { "Ref" : "DBRootPassword" }, "' < /tmp/setup.mysql"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "03_configure_wordpress" : {
            "command" : "/tmp/create-wp-config",
            "cwd" : "/var/www/html/wordpress"
          }
        }
      }
    }
  },
  "Properties": {
    "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                      { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
    "InstanceType"   : { "Ref" : "InstanceType" },
    "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
    "KeyName"        : { "Ref" : "KeyName" },
    "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                   "#!/bin/bash -xe\n",
                   "yum update -y aws-cfn-bootstrap\n",

                   "/opt/aws/bin/cfn-init -v ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --configsets wordpress_install ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n",

                   "/opt/aws/bin/cfn-signal -e $? ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  },
  "CreationPolicy" : {
    "ResourceSignal" : {
      "Timeout" : "PT15M"
    }
  }
}

},

"Outputs" : { "WebsiteURL" : { "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WebServer", "PublicDnsName" ]}, "/wordpress" ]]}, "Description" : "WordPress Website" } } }

#+END_EXAMPLE
** DONE example for cloudformation template
CLOSED: [2015-05-05 Tue 11:25]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/example-templates-autoscaling.html
Auto Scaling Multi-AZ Template
#+BEGIN_EXAMPLE
{ "AWSTemplateFormatVersion" : "2010-09-09",

"Description" : "AWS CloudFormation Sample Template AutoScalingMultiAZWithNotifications: Create a multi-az, load balanced and Auto Scaled sample web site running on an Apache Web Serever. The application is configured to span all Availability Zones in the region and is Auto-Scaled based on the CPU utilization of the web servers. Notifications will be sent to the operator email address on scaling events. The instances are load balanced with a simple health check against the default web page. WARNING This template creates one or more Amazon EC2 instances and an Elastic Load Balancer. You will be billed for the AWS resources used if you create a stack from this template.",

"Parameters" : { "InstanceType" : { "Description" : "WebServer EC2 instance type", "Type" : "String", "Default" : "m1.small", "AllowedValues" : [ "t1.micro", "t2.micro", "t2.small", "t2.medium", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "g2.2xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"], "ConstraintDescription" : "must be a valid EC2 instance type." },

"OperatorEMail": {
  "Description": "EMail address to notify if there are any scaling operations",
  "Type": "String",
  "AllowedPattern": "([a-zA-Z0-9_\\-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)",
  "ConstraintDescription": "must be a valid email address."
},

"KeyName" : {
  "Description" : "The EC2 Key Pair to allow SSH access to the instances",
  "Type" : "AWS::EC2::KeyPair::KeyName",
  "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
},

"SSHLocation" : {
  "Description" : "The IP address range that can be used to SSH to the EC2 instances",
  "Type": "String",
  "MinLength": "9",
  "MaxLength": "18",
  "Default": "0.0.0.0/0",
  "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
  "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
}

},

"Mappings" : { "AWSInstanceType2Arch" : { "t1.micro" : { "Arch" : "PV64" }, "t2.micro" : { "Arch" : "HVM64" }, "t2.small" : { "Arch" : "HVM64" }, "t2.medium" : { "Arch" : "HVM64" }, "m1.small" : { "Arch" : "PV64" }, "m1.medium" : { "Arch" : "PV64" }, "m1.large" : { "Arch" : "PV64" }, "m1.xlarge" : { "Arch" : "PV64" }, "m2.xlarge" : { "Arch" : "PV64" }, "m2.2xlarge" : { "Arch" : "PV64" }, "m2.4xlarge" : { "Arch" : "PV64" }, "m3.medium" : { "Arch" : "HVM64" }, "m3.large" : { "Arch" : "HVM64" }, "m3.xlarge" : { "Arch" : "HVM64" }, "m3.2xlarge" : { "Arch" : "HVM64" }, "c1.medium" : { "Arch" : "PV64" }, "c1.xlarge" : { "Arch" : "PV64" }, "c3.large" : { "Arch" : "HVM64" }, "c3.xlarge" : { "Arch" : "HVM64" }, "c3.2xlarge" : { "Arch" : "HVM64" }, "c3.4xlarge" : { "Arch" : "HVM64" }, "c3.8xlarge" : { "Arch" : "HVM64" }, "g2.2xlarge" : { "Arch" : "HVMG2" }, "r3.large" : { "Arch" : "HVM64" }, "r3.xlarge" : { "Arch" : "HVM64" }, "r3.2xlarge" : { "Arch" : "HVM64" }, "r3.4xlarge" : { "Arch" : "HVM64" }, "r3.8xlarge" : { "Arch" : "HVM64" }, "i2.xlarge" : { "Arch" : "HVM64" }, "i2.2xlarge" : { "Arch" : "HVM64" }, "i2.4xlarge" : { "Arch" : "HVM64" }, "i2.8xlarge" : { "Arch" : "HVM64" }, "hi1.4xlarge" : { "Arch" : "HVM64" }, "hs1.8xlarge" : { "Arch" : "HVM64" }, "cr1.8xlarge" : { "Arch" : "HVM64" }, "cc2.8xlarge" : { "Arch" : "HVM64" } },

"AWSRegionArch2AMI" : {
  "us-east-1"      : { "PV64" : "ami-50842d38", "HVM64" : "ami-08842d60", "HVMG2" : "ami-3a329952"  },
  "us-west-2"      : { "PV64" : "ami-af86c69f", "HVM64" : "ami-8786c6b7", "HVMG2" : "ami-47296a77"  },
  "us-west-1"      : { "PV64" : "ami-c7a8a182", "HVM64" : "ami-cfa8a18a", "HVMG2" : "ami-331b1376"  },
  "eu-west-1"      : { "PV64" : "ami-aa8f28dd", "HVM64" : "ami-748e2903", "HVMG2" : "ami-00913777"  },
  "ap-southeast-1" : { "PV64" : "ami-20e1c572", "HVM64" : "ami-d6e1c584", "HVMG2" : "ami-fabe9aa8"  },
  "ap-northeast-1" : { "PV64" : "ami-21072820", "HVM64" : "ami-35072834", "HVMG2" : "ami-5dd1ff5c"  },
  "ap-southeast-2" : { "PV64" : "ami-8b4724b1", "HVM64" : "ami-fd4724c7", "HVMG2" : "ami-e98ae9d3"  },
  "sa-east-1"      : { "PV64" : "ami-9d6cc680", "HVM64" : "ami-956cc688", "HVMG2" : "NOT_SUPPORTED" },
  "cn-north-1"     : { "PV64" : "ami-a857c591", "HVM64" : "ami-ac57c595", "HVMG2" : "NOT_SUPPORTED" },
  "eu-central-1"   : { "PV64" : "ami-a03503bd", "HVM64" : "ami-b43503a9", "HVMG2" : "ami-b03503ad"  }
}

},

"Resources" : { "NotificationTopic": { "Type": "AWS::SNS::Topic", "Properties": { "Subscription": [ { "Endpoint": { "Ref": "OperatorEMail" }, "Protocol": "email" } ] } },

"WebServerGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : { "Fn::GetAZs" : ""},
    "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
    "MinSize" : "1",
    "MaxSize" : "3",
    "LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ],
    "NotificationConfigurations" : [{
      "TopicARN" : { "Ref" : "NotificationTopic" },
      "NotificationTypes" : [ "autoscaling:EC2_INSTANCE_LAUNCH",
                              "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
                              "autoscaling:EC2_INSTANCE_TERMINATE",
                              "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"]
    }]
  },
  "CreationPolicy" : {
    "ResourceSignal" : {
      "Timeout" : "PT15M",
      "Count"   : "1"
    }
  },
  "UpdatePolicy": {
    "AutoScalingRollingUpdate": {
      "MinInstancesInService": "1",
      "MaxBatchSize": "1",
      "PauseTime" : "PT15M",
      "WaitOnResourceSignals": "true"
    }
  }
},

"LaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Metadata" : {
    "Comment" : "Install a simple application",
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
          "yum" : {
            "httpd" : []
          }
        },

        "files" : {
          "/var/www/html/index.html" : {
            "content" : { "Fn::Join" : ["\n", [
              "<img src=\"https://s3.amazonaws.com/cloudformation-examples/cloudformation_graphic.png\" alt=\"AWS CloudFormation Logo\"/>",
              "<h1>Congratulations, you have successfully launched the AWS CloudFormation sample.</h1>"
            ]]},
            "mode"    : "000644",
            "owner"   : "root",
            "group"   : "root"
          },

          "/etc/cfn/cfn-hup.conf" : {
            "content" : { "Fn::Join" : ["", [
              "[main]\n",
              "stack=", { "Ref" : "AWS::StackId" }, "\n",
              "region=", { "Ref" : "AWS::Region" }, "\n"
            ]]},
            "mode"    : "000400",
            "owner"   : "root",
            "group"   : "root"
          },

          "/etc/cfn/hooks.d/cfn-auto-reloader.conf" : {
            "content": { "Fn::Join" : ["", [
              "[cfn-auto-reloader-hook]\n",
              "triggers=post.update\n",
              "path=Resources.LaunchConfig.Metadata.AWS::CloudFormation::Init\n",
              "action=/opt/aws/bin/cfn-init -v ",
              "         --stack ", { "Ref" : "AWS::StackName" },
              "         --resource LaunchConfig ",
              "         --region ", { "Ref" : "AWS::Region" }, "\n",
              "runas=root\n"
            ]]}
          }
        },

        "services" : {
          "sysvinit" : {
            "httpd"    : { "enabled" : "true", "ensureRunning" : "true" },
            "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                          "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"]}
          }
        }
      }
    }
  },
  "Properties" : {
    "KeyName" : { "Ref" : "KeyName" },
    "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                                      { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
    "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
    "InstanceType" : { "Ref" : "InstanceType" },
    "UserData"       : { "Fn::Base64" : { "Fn::Join" : ["", [
         "#!/bin/bash -xe\n",
         "yum update -y aws-cfn-bootstrap\n",

         "/opt/aws/bin/cfn-init -v ",
         "         --stack ", { "Ref" : "AWS::StackName" },
         "         --resource LaunchConfig ",
         "         --region ", { "Ref" : "AWS::Region" }, "\n",

         "/opt/aws/bin/cfn-signal -e $? ",
         "         --stack ", { "Ref" : "AWS::StackName" },
         "         --resource WebServerGroup ",
         "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  }
},

"WebServerScaleUpPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
    "Cooldown" : "60",
    "ScalingAdjustment" : "1"
  }
},
"WebServerScaleDownPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
    "Cooldown" : "60",
    "ScalingAdjustment" : "-1"
  }
},

"CPUAlarmHigh": {
 "Type": "AWS::CloudWatch::Alarm",
 "Properties": {
    "AlarmDescription": "Scale-up if CPU > 90% for 10 minutes",
    "MetricName": "CPUUtilization",
    "Namespace": "AWS/EC2",
    "Statistic": "Average",
    "Period": "300",
    "EvaluationPeriods": "2",
    "Threshold": "90",
    "AlarmActions": [ { "Ref": "WebServerScaleUpPolicy" } ],
    "Dimensions": [
      {
        "Name": "AutoScalingGroupName",
        "Value": { "Ref": "WebServerGroup" }
      }
    ],
    "ComparisonOperator": "GreaterThanThreshold"
  }
},
"CPUAlarmLow": {
 "Type": "AWS::CloudWatch::Alarm",
 "Properties": {
    "AlarmDescription": "Scale-down if CPU < 70% for 10 minutes",
    "MetricName": "CPUUtilization",
    "Namespace": "AWS/EC2",
    "Statistic": "Average",
    "Period": "300",
    "EvaluationPeriods": "2",
    "Threshold": "70",
    "AlarmActions": [ { "Ref": "WebServerScaleDownPolicy" } ],
    "Dimensions": [
      {
        "Name": "AutoScalingGroupName",
        "Value": { "Ref": "WebServerGroup" }
      }
    ],
    "ComparisonOperator": "LessThanThreshold"
  }
},

"ElasticLoadBalancer" : {
  "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties" : {
    "AvailabilityZones" : { "Fn::GetAZs" : "" },
    "CrossZone" : "true",
    "Listeners" : [ {
      "LoadBalancerPort" : "80",
      "InstancePort" : "80",
      "Protocol" : "HTTP"
    } ],
    "HealthCheck" : {
      "Target" : "HTTP:80/",
      "HealthyThreshold" : "3",
      "UnhealthyThreshold" : "5",
      "Interval" : "30",
      "Timeout" : "5"
    }
  }
},

"InstanceSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Properties" : {
    "GroupDescription" : "Enable SSH access and HTTP from the load balancer only",
    "SecurityGroupIngress" : [ {
      "IpProtocol" : "tcp",
      "FromPort" : "22",
      "ToPort" : "22",
      "CidrIp" : { "Ref" : "SSHLocation"}
    },
    {
      "IpProtocol" : "tcp",
      "FromPort" : "80",
      "ToPort" : "80",
      "SourceSecurityGroupOwnerId" : {"Fn::GetAtt" : ["ElasticLoadBalancer", "SourceSecurityGroup.OwnerAlias"]},
      "SourceSecurityGroupName" : {"Fn::GetAtt" : ["ElasticLoadBalancer", "SourceSecurityGroup.GroupName"]}
    } ]
  }
}

},

"Outputs" : { "URL" : { "Description" : "The URL of the website", "Value" : { "Fn::Join" : [ "", [ "http://", { "Fn::GetAtt" : [ "ElasticLoadBalancer", "DNSName" ]}]]} } } } #+END_EXAMPLE ** DONE CloudFormation provides a set of application bootstrapping scripts which enables the user to install software CLOSED: [2015-05-05 Tue 11:37] http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html

https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf

AWS CloudFormation provides a set of Python helper scripts that you can use to install software and start services on an Amazon EC2 instance
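These helpers pair with template-level policies; in particular, cfn-signal (see the table below) reports success or failure back to a CreationPolicy on the signaled resource. A minimal YAML sketch, with a hypothetical logical ID:
#+BEGIN_EXAMPLE
WebServer:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      # stack creation blocks here until cfn-signal reports in, or the timeout expires
      Count: 1
      Timeout: PT15M
#+END_EXAMPLE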

AWS CloudFormation provides the following helpers:
| Helper           | Summary                                                                                                                                                                      |
|------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| cfn-init         | Used to retrieve and interpret the resource metadata, installing packages, creating files and starting services.                                                            |
| cfn-signal       | A simple wrapper to signal an AWS CloudFormation CreationPolicy or WaitCondition, enabling you to synchronize other resources in the stack with the application being ready. |
| cfn-get-metadata | A wrapper script making it easy to retrieve either all metadata defined for a resource or path to a specific key or subtree of the resource metadata.                       |
| cfn-hup          | A daemon to check for updates to metadata and execute custom hooks when the changes are detected.                                                                           |
*** question
A user is planning to use AWS Cloudformation. Which of the below mentioned functionalities does not help him to correctly understand Cloudformation?

A) Cloudformation works with a wide variety of AWS services, such as EC2, EBS, VPC, IAM, S3, RDS, ELB, etc.
B) AWS CloudFormation does not charge the user for its service but only charges for the AWS resources created with it.
C) CloudFormation provides a set of application bootstrapping scripts which enables the user to install software.
D) Cloudformation follows the DevOps model for the creation of Dev & Test.

D

AWS CloudFormation is an application management tool which provides application modelling, deployment, configuration, management and related activities. It supports a wide variety of AWS services, such as EC2, EBS, AS, ELB, RDS, VPC, etc. It also provides application bootstrapping scripts which enable the user to install software packages or create folders. It is free of cost and only charges the user for the services created with it. The only challenge is that it does not follow any model, such as DevOps; instead, customers can define templates and use them to provision and manage AWS resources in an orderly way.
http://aws.amazon.com/cloudformation/faqs/
** DONE CloudFormation required steps question
CLOSED: [2015-05-05 Tue 11:37]
A customer is using AWS for Dev and Test. The customer wants to setup the Dev environment with Cloudformation. Which of the below mentioned steps are not required while using Cloudformation?

A) Create a stack
B) Create and upload the template
C) Configure a service
D) Provide the parameters configured as part of the template

C

AWS CloudFormation is an application management tool which provides application modelling, deployment, configuration, management and related activities. AWS CloudFormation introduces two concepts: the template and the stack. The template is a JSON-format, text-based file that describes all the AWS resources required to deploy and run an application. The stack is a collection of AWS resources which are created and managed as a single unit when AWS CloudFormation instantiates a template. While creating a stack, the user uploads the template and provides the data for the parameters if required.
http://aws.amazon.com/cloudformation/faqs/
** # --8<-------------------------- separator ------------------------>8--
** TODO [#A] What resource CloudFormation supports
https://aws.amazon.com/cloudformation/faqs/
AWS CloudFormation currently supports: Auto Scaling, Amazon CloudFront, AWS CloudTrail, AWS CloudWatch, Amazon DynamoDB, Amazon EC2, Amazon ElastiCache, AWS Elastic Beanstalk, AWS Elastic Load Balancing, Amazon Kinesis, AWS Identity and Access Management, AWS OpsWorks, Amazon RDS, Amazon Redshift, Amazon Route 53, Amazon S3, Amazon SimpleDB, Amazon SNS, Amazon SQS, Amazon VPC.

#+BEGIN_EXAMPLE
You have been handed a major project with one of the caveats being that you need to use Cloudformation and also include the following services: Auto Scaling, Amazon CloudFront, AWS Elastic Load Balancing, Amazon RDS, Amazon Kinesis, Amazon Glacier and Amazon ElastiCache. You find out however that one of these services is not supported by Cloudformation. Which one?

A) Amazon Glacier
B) Auto Scaling
C) Amazon ElastiCache
D) Amazon RDS

A
#+END_EXAMPLE
** TODO [#A] What are the mandatory elements of a CloudFormation definition
** TODO [#A] CloudFormation procedure to limit downtime question
You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 due to the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, while memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?
A) Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
B) Sign into the AWS Management Console and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScaling RollingUpdate.
C) Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
D) Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the new template.

D (a stack update plus an UpdatePolicy with AutoScalingRollingUpdate replaces instances in batches; changing a launch configuration alone does not touch already-running instances)
** TODO [#B] CloudFormation definition question
You have a complex system involving networking, IAM policies, and multiple, three-tier applications. You are still receiving requirements for the new system, so you don't yet know how many AWS components will be present in the final design. You would like to start defining these AWS resources using AWS CloudFormation so that you can automate and version-control your infrastructure. How would you use AWS CloudFormation to provide agile new environments for your customers in a cost-effective, reliable manner?
A) Create one single template by hand to encompass all resources that you need for the system, so you only have a single template to version-control.
B) Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation and maintain several templates to version-control.
C) Create multiple separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon Elastic Compute Cloud (EC2) Instance running the SDK for finer granularity of control.
D) Manually construct the networking layer using Amazon Virtual Private Cloud (VPC) because this does not change often and then define all other ephemeral resources using AWS CloudFormation.
** TODO [#A] Metadata: AWS::CloudFormation::Init
#+BEGIN_EXAMPLE
"WebServer": {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "configSets" : {
        "wordpress_install" : ["install_cfn", "install_wordpress", "configure_wordpress" ]
      },
      "install_cfn" : {
        "files": {
          "/etc/cfn/cfn-hup.conf": {
            "content": { "Fn::Join": [ "", [
              "[main]\n",
              "stack=", { "Ref": "AWS::StackId" }, "\n",
              "region=", { "Ref": "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },
          "/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
            "content": { "Fn::Join": [ "", [
              "[cfn-auto-reloader-hook]\n",
              "triggers=post.update\n",
              "path=Resources.WebServer.Metadata.AWS::CloudFormation::Init\n",
              "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ", { "Ref" : "AWS::StackName" },
                      "         --resource WebServer ",
                      "         --configsets wordpress_install ",
                      "         --region ", { "Ref" : "AWS::Region" }, "\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                          "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"] }
          }
        }
      },

      "install_wordpress" : {
        "packages" : {
          "yum" : {
            "php"          : [],
            "php-mysql"    : [],
            "mysql"        : [],
            "mysql-server" : [],
            "mysql-devel"  : [],
            "mysql-libs"   : [],
            "httpd"        : []
          }
        },
        "sources" : {
          "/var/www/html" : "http://wordpress.org/latest.tar.gz"
        },
        "files" : {
          "/tmp/setup.mysql" : {
            "content" : { "Fn::Join" : ["", [
              "CREATE DATABASE ", { "Ref" : "DBName" }, ";\n",
              "CREATE USER '", { "Ref" : "DBUser" }, "'@'localhost' IDENTIFIED BY '", { "Ref" : "DBPassword" }, "';\n",
              "GRANT ALL ON ", { "Ref" : "DBName" }, ".* TO '", { "Ref" : "DBUser" }, "'@'localhost';\n",
              "FLUSH PRIVILEGES;\n"
            ]]},
            "mode"  : "000400",
            "owner" : "root",
            "group" : "root"
          },

          "/tmp/create-wp-config" : {
            "content" : { "Fn::Join" : [ "", [
              "#!/bin/bash -xe\n",
              "cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php\n",
              "sed -i \"s/'database_name_here'/'",{ "Ref" : "DBName" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'username_here'/'",{ "Ref" : "DBUser" }, "'/g\" wp-config.php\n",
              "sed -i \"s/'password_here'/'",{ "Ref" : "DBPassword" }, "'/g\" wp-config.php\n"
            ]]},
            "mode" : "000500",
            "owner" : "root",
            "group" : "root"
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd"  : { "enabled" : "true", "ensureRunning" : "true" },
            "mysqld" : { "enabled" : "true", "ensureRunning" : "true" }
          }
        }
      },

      "configure_wordpress" : {
        "commands" : {
          "01_set_mysql_root_password" : {
            "command" : { "Fn::Join" : ["", ["mysqladmin -u root password '", { "Ref" : "DBRootPassword" }, "'"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "02_create_database" : {
            "command" : { "Fn::Join" : ["", ["mysql -u root --password='", { "Ref" : "DBRootPassword" }, "' < /tmp/setup.mysql"]]},
            "test" : { "Fn::Join" : ["", ["$(mysql ", { "Ref" : "DBName" }, " -u root --password='", { "Ref" : "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]]}
          },
          "03_configure_wordpress" : {
            "command" : "/tmp/create-wp-config",
            "cwd" : "/var/www/html/wordpress"
          }
        }
      }
    }
  },
  "Properties": {
    "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                      { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
    "InstanceType"   : { "Ref" : "InstanceType" },
    "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
    "KeyName"        : { "Ref" : "KeyName" },
    "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                   "#!/bin/bash -xe\n",
                   "yum update -y aws-cfn-bootstrap\n",

                   "/opt/aws/bin/cfn-init -v ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --configsets wordpress_install ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n",

                   "/opt/aws/bin/cfn-signal -e $? ",
                   "         --stack ", { "Ref" : "AWS::StackName" },
                   "         --resource WebServer ",
                   "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  },
  "CreationPolicy" : {
    "ResourceSignal" : {
      "Timeout" : "PT15M"
    }
  }
}

#+END_EXAMPLE
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** TODO cf create role
** TODO How does CF rollback work?
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** DONE [#A] cloudformation types
CLOSED: [2017-11-13 Mon 15:42]
https://aws.amazon.com/blogs/devops/using-the-new-cloudformation-parameter-types/
#+BEGIN_EXAMPLE
CloudFormation currently supports the following parameter types:

String - A literal string
Number - An integer or float
List<Number> - An array of integers or floats
CommaDelimitedList - An array of literal strings that are separated by commas
AWS::EC2::KeyPair::KeyName - An Amazon EC2 key pair name
AWS::EC2::SecurityGroup::Id - A security group ID
AWS::EC2::Subnet::Id - A subnet ID
AWS::EC2::VPC::Id - A VPC ID
List<AWS::EC2::VPC::Id> - An array of VPC IDs
List<AWS::EC2::SecurityGroup::Id> - An array of security group IDs
List<AWS::EC2::Subnet::Id> - An array of subnet IDs
#+END_EXAMPLE
** DONE cloudformation use map to give users limited choice list
CLOSED: [2017-11-13 Mon 17:48]
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml#L114-L132
#+BEGIN_EXAMPLE
StackType:
  Description: node, rails, python, python3 or spring.
  Type: String
  MinLength: 1
  MaxLength: 255
  AllowedValues:
    - node
    - rails
    - spring
    - python
    - python3
  ConstraintDescription: Specify node, rails, python, python3 or spring.
#+END_EXAMPLE

#+BEGIN_EXAMPLE
Mappings:
  # Maps stack type parameter to solution stack name string
  StackMap:
    node:
      stackName: 64bit Amazon Linux 2017.03 v4.3.0 running Node.js
    rails:
      stackName: 64bit Amazon Linux 2017.03 v2.4.4 running Ruby 2.3 (Puma)
    spring:
      stackName: 64bit Amazon Linux 2017.03 v2.6.4 running Tomcat 8 Java 8
    python:
      stackName: 64bit Amazon Linux 2017.03 v2.5.1 running Python 2.7
    python3:
      stackName: 64bit Amazon Linux 2017.03 v2.5.1 running Python 3.4
#+END_EXAMPLE

#+BEGIN_EXAMPLE
ConfigurationTemplate:
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref Application
    SolutionStackName: !FindInMap [ StackMap, !Ref StackType, stackName ]
    OptionSettings:
#+END_EXAMPLE
** DONE Fn::Sub: substitutes variables in an input string with values that you specify.
CLOSED: [2017-11-13 Mon 17:58]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html
#+BEGIN_EXAMPLE
Name: !Sub

  - www.${Domain}
  - { Domain: !Ref RootDomainName }
#+END_EXAMPLE
** DONE Use cloudformation to create an EC2 instance
CLOSED: [2017-11-13 Mon 18:09]
https://github.com/awslabs/startup-kit-templates/blob/master/bastion.cfn.yml
** DONE Cloudformation use env in userdata
CLOSED: [2017-11-14 Tue 08:21]

{ "Ref" : "AWS::StackName" }

${AWS::StackName}

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-general.html

#+BEGIN_EXAMPLE
JSON

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
     "#!/bin/bash -xe\n",
     "yum install -y aws-cfn-bootstrap\n",

         "/opt/aws/bin/cfn-init -v ",
         "         --stack ", { "Ref" : "AWS::StackName" },
         "         --resource LaunchConfig ",
         "         --region ", { "Ref" : "AWS::Region" }, "\n",

         "/opt/aws/bin/cfn-signal -e $? ",
         "         --stack ", { "Ref" : "AWS::StackName" },
         "         --resource WebServerGroup ",
         "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  }

YAML

UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    yum update -y aws-cfn-bootstrap
    /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource LaunchConfig --region ${AWS::Region}
    /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServerGroup --region ${AWS::Region}
#+END_EXAMPLE
** DONE Check details of AWS UserData: /var/log/cfn-init.log
CLOSED: [2017-11-14 Tue 09:38]
#+BEGIN_EXAMPLE
[ec2-user@ip-172-31-31-80 log]$ grep docker_start_jenkins.sh ./ -r
grep: ./cron: Permission denied
./cloud-init-output.log:Error occurred during build: Unsupported source file (not zip or tarball): https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh
grep: ./tallylog: Permission denied
grep: ./boot.log: Permission denied
grep: ./btmp: Permission denied
grep: ./maillog: Permission denied
./cfn-init.log:2017-11-14 15:35:42,328 [ERROR] Error encountered during build of install_jenkins: Unsupported source file (not zip or tarball): https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh
./cfn-init.log:ToolError: Unsupported source file (not zip or tarball): https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh
./cfn-init.log:2017-11-14 15:35:42,330 [ERROR] Unhandled exception during build: Unsupported source file (not zip or tarball): https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh
./cfn-init.log:ToolError: Unsupported source file (not zip or tarball): https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh
grep: ./audit: Permission denied
grep: ./messages: Permission denied
grep: ./secure: Permission denied
grep: ./mail/statistics: Permission denied
grep: ./spooler: Permission denied
grep: ./yum.log: Permission denied
./cfn-wire.log:2017-11-14 15:35:42,314 [DEBUG] Response: 200 https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/docker_start_jenkins.sh [headers: {'content-length': '807', 'via': '1.1 varnish', 'vary': 'Authorization,Accept-Encoding', 'strict-transport-security': 'max-age=31536000', 'x-content-type-options': 'nosniff', 'etag': '"22045449c1be0a724e3ff322aff06935d30e9adb"', 'x-cache-hits': '0', 'cache-control': 'max-age=300', 'source-age': '0', 'x-served-by': 'cache-iad2631-IAD', 'x-cache': 'MISS', 'x-github-request-id': '8AFE:2988:9F06E:ABA1B:5A0B0D4D', 'accept-ranges': 'bytes', 'expires': 'Tue, 14 Nov 2017 15:40:42 GMT', 'x-xss-protection': '1; mode=block', 'x-geo-block-list': '', 'date': 'Tue, 14 Nov 2017 15:35:42 GMT', 'access-control-allow-origin': '*', 'content-security-policy': "default-src 'none'; style-src 'unsafe-inline'; sandbox", 'content-encoding': 'gzip', 'x-timer': 'S1510673742.287671,VS0,VE24', 'connection': 'keep-alive', 'x-fastly-request-id': 'ece6c891e65d5271e9b1d9450e00c8ab27a27204', 'x-frame-options': 'deny', 'content-type': 'text/plain; charset=utf-8'}]
#+END_EXAMPLE
** DONE Download single files from GitHub in CloudFormation template
CLOSED: [2017-11-14 Tue 09:56]
https://serverfault.com/questions/456812/download-single-files-from-github-in-cloudformation-template
https://forums.aws.amazon.com/thread.jspa?threadID=111736&tstart=0
#+BEGIN_EXAMPLE
files:
  # TODO: create folder
  /home/ec2-user/install_docker.sh:
    source: https://raw.githubusercontent.com/DennyZhang/aws-jenkins-study/master/utility/bash-scripts/install_docker.sh
    mode: 755
    owner: ec2-user
    group: ec2-user
#+END_EXAMPLE
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** DONE start the stack from command line
CLOSED: [2017-11-14 Tue 11:48]
export stack_name="docker-cf-jenkins"
export tmp_file="file://cf-denny-jenkins-docker-aio.yml"
aws cloudformation create-stack --template-body "$tmp_file" \
  --stack-name "$stack_name" --parameters \
  ParameterKey=JenkinsUser,ParameterValue=username \
  ParameterKey=JenkinsPassword,ParameterValue=mypassword \
  ParameterKey=KeyName,ParameterValue=denny-ssh-key1

aws cloudformation delete-stack --stack-name "$stack_name"
** DONE cloudformation init test command: If the test passes, cfn-init runs the commands.
CLOSED: [2017-11-14 Tue 12:03]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
** DONE aws t1.micro vs t2.micro
CLOSED: [2017-11-14 Tue 12:11]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html
The t1.micro is a previous-generation instance type; it has been replaced by the t2.micro.
** DONE cloudformation Outputs: Value vs Export
CLOSED: [2017-11-14 Tue 12:33]
The optional Outputs section declares output values that you can import into other stacks.

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html

https://github.com/awslabs/startup-kit-templates/blob/master/db.cfn.yml#L124-L130

#+BEGIN_EXAMPLE
Outputs:
  Logical ID:
    Description: Information about the value
    Value: Value to return
    Export:
      Name: Value to export
#+END_EXAMPLE
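On the consuming side, another stack reads the exported name with Fn::ImportValue; a minimal sketch (NetworkStackName and the export name are hypothetical, following the bastion example below):
#+BEGIN_EXAMPLE
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      # other required properties (ImageId, InstanceType, ...) omitted from this sketch
      SubnetId: !ImportValue
        "Fn::Sub": "${NetworkStackName}-PublicSubnet1ID"
#+END_EXAMPLE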

- Sample
#+BEGIN_EXAMPLE
Outputs:
  RdsDbId:
    Description: RDS Database ID
    Value: !Ref Database
    Export:
      Name: !Sub "${AWS::StackName}-DatabaseID"
#+END_EXAMPLE
** TODO [#A] cloudformation Resources
*** AWS::EC2::Instance
https://github.com/awslabs/startup-kit-templates/blob/master/bastion.cfn.yml
#+BEGIN_EXAMPLE
BastionHost:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: t2.micro
    KeyName: !Ref KeyName
    ImageId: !FindInMap [AMIMap, !Ref "AWS::Region", AMI]
    SubnetId: !ImportValue
      "Fn::Sub": "${NetworkStackName}-PublicSubnet1ID"
    SecurityGroupIds:
      - !ImportValue
        "Fn::Sub": "${NetworkStackName}-BastionGroupID"
    Tags:
      - Key: Name
        Value: startup-kit-bastion
#+END_EXAMPLE
*** AWS::EC2::EIP
https://github.com/awslabs/startup-kit-templates/blob/master/bastion.cfn.yml
#+BEGIN_EXAMPLE
BastionEIP:
  Type: AWS::EC2::EIP
  Properties:
    InstanceId: !Ref BastionHost
    Domain: vpc
#+END_EXAMPLE
*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** AWS::IAM::InstanceProfile
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
AppInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles:
      - !Ref AppRole
#+END_EXAMPLE
*** AWS::IAM::Role
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
ElasticBeanstalkServiceRole:
  Type: AWS::IAM::Role
  Properties:
    Path: /
    AssumeRolePolicyDocument: |
      {
        "Statement": [{
          "Effect": "Allow",
          "Principal": { "Service": [ "elasticbeanstalk.amazonaws.com" ]},
          "Action": [ "sts:AssumeRole" ]
        }]
      }
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth
      - arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkService
#+END_EXAMPLE

https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
# IAM resources
AppRole:
  Type: AWS::IAM::Role
  Properties:
    Path: /
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ec2.amazonaws.com
          Action:
            - sts:AssumeRole
#+END_EXAMPLE
*** AWS::IAM::Policy
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
AppPolicies:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: App
    Roles:
      - !Ref AppRole
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action: "*"
          Resource: "*"
#+END_EXAMPLE
*** # --8<-------------------------- separator ------------------------>8-- :noexport:
*** AWS::ElasticBeanstalk::Application
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
Application:
  Type: AWS::ElasticBeanstalk::Application
  Properties:
    ApplicationName: !Ref ApplicationName
#+END_EXAMPLE
*** AWS::ElasticBeanstalk::ApplicationVersion
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
ApplicationVersion:
  Type: AWS::ElasticBeanstalk::ApplicationVersion
  Properties:
    ApplicationName: !Ref Application
    SourceBundle:
      S3Bucket: !Ref AppS3Bucket
      S3Key: !Ref AppS3Key
#+END_EXAMPLE
*** AWS::ElasticBeanstalk::Environment
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE
Environment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    EnvironmentName: !Sub "${ApplicationName}-${EnvironmentName}"
    ApplicationName: !Ref Application
    TemplateName: !Ref ConfigurationTemplate
    VersionLabel: !Ref ApplicationVersion
  DependsOn:
    - ConfigurationTemplate
    - ApplicationVersion
#+END_EXAMPLE
*** AWS::ElasticBeanstalk::ConfigurationTemplate
https://github.com/awslabs/startup-kit-templates/blob/07aa16757de351b26d299756e6950ca02b4afa9a/app.cfn.yml
#+BEGIN_EXAMPLE

# The configuration template contains environment parameters such as those
# that relate to the autoscaling group (e.g. size, triggers), placement of
# resources in the VPC, load balancer setup, and environment variables
ConfigurationTemplate:
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref Application
    SolutionStackName: !FindInMap [ StackMap, !Ref StackType, stackName ]
    OptionSettings:

      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: LoadBalanced
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: LoadBalancerType
        Value: application
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: ServiceRole
        Value: !Ref ElasticBeanstalkServiceRole

#+END_EXAMPLE
** DONE [#A] cloudformation download github folder
CLOSED: [2017-11-16 Thu 00:44]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
#+BEGIN_EXAMPLE
sources:
  /home/ec2-user/chef/aws-jenkins-study: 'https://github.com/DennyZhang/aws-jenkins-study/tarball/master'
  owner: ec2-user
  group: ec2-user
#+END_EXAMPLE
** DONE cloudformation sources/files
CLOSED: [2017-11-16 Thu 11:24]

- files: A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
- sources: A list of directories. If cfn-init expands an archive into one of these directories, this service will be restarted (see the sketch below).
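A minimal YAML sketch of wiring both keys under a service definition (the service name and paths are hypothetical):
#+BEGIN_EXAMPLE
services:
  sysvinit:
    httpd:
      enabled: "true"
      ensureRunning: "true"
      # restart httpd when this file is rewritten by cfn-init
      files:
        - /etc/httpd/conf/httpd.conf
      # restart httpd when an archive is expanded into this directory
      sources:
        - /var/www/html
#+END_EXAMPLE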

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
** DONE EC2 cloudformation install chefdk
CLOSED: [2017-11-16 Thu 11:38]
https://learn.chef.io/modules/learn-the-basics/rhel/aws/set-up-a-machine-to-manage#/
** DONE cf commands use which os user to run the command: root?
CLOSED: [2017-11-16 Thu 15:49]
** HALF cf sources can't specify username and group
In CloudFormation, sources can't specify user and group?

I was hoping something like this.

          sources:
              /home/ec2-user/chef/aws-jenkins-study: 'https://github.com/DennyZhang/aws-jenkins-study/tarball/master'
              owner: ec2-user
              group: ec2-user

But in cfn-init.log I get this error

2017-11-16 17:27:42,347 [ERROR] Error encountered during build of setup_jenkins: ec2-user does not exist
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cfnbootstrap/construction.py", line 542, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python2.7/dist-packages/cfnbootstrap/construction.py", line 246, in build
    changes['sources'] = SourcesTool().apply(self._config.sources, self._auth_config)
  File "/usr/lib/python2.7/dist-packages/cfnbootstrap/sources_tool.py", line 69, in apply
    raise ToolError("%s does not exist" % archive)
ToolError: ec2-user does not exist
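A common workaround (a sketch, untested here) is to let sources extract as root and then fix ownership with a commands entry, since cfn-init runs commands after sources:

          sources:
              /home/ec2-user/chef/aws-jenkins-study: 'https://github.com/DennyZhang/aws-jenkins-study/tarball/master'
          commands:
              01_fix_ownership:
                  # hand the extracted tree back to ec2-user
                  command: chown -R ec2-user:ec2-user /home/ec2-user/chef/aws-jenkins-study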

** TODO cloudformation download and install rpm
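A sketch of one way to do this with cfn-init's packages key (the rpm URL is a placeholder, not a tested artifact):
#+BEGIN_EXAMPLE
packages:
  rpm:
    # install an rpm directly from a URL
    epel: http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
  yum:
    # then install yum packages, possibly from the repo the rpm enabled
    httpd: []
#+END_EXAMPLE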

* [#A] Amazon S3: Scalable Storage in the Cloud :noexport:IMPORTANT:
Limitation:
| Name            | Comment                                               |
|-----------------+-------------------------------------------------------|
| bucket count    | Each AWS account can own up to 100 buckets at a time. |
| Object size     | can range from 1 byte to 5 TB                         |
| S3 availability | 99.9%                                                 |
| S3 durability   | 99.999999999%                                         |
|-----------------+-------------------------------------------------------|
| RRS durability  | 99.99%                                                |
- Amazon S3 is based on an "eventually consistent" model

- The number of objects you can store is unlimited

- By default, your Amazon S3 buckets and objects are private.

- The bucket name you choose must be unique across all existing bucket names in Amazon S3.

- After you create a bucket, you cannot change its name.

- Bucket ownership is not transferable.
** TODO [#A] Four mechanisms to control S3 resource access :IMPORTANT:
Customers may use four mechanisms for controlling access to Amazon S3 resources:
| Name                        | Summary |
|-----------------------------+---------|
| IAM Policies                |         |
| bucket policies             |         |
| Access Control Lists (ACLs) |         |
| query string authentication |         |

- Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies.

http://aws.amazon.com/s3/faqs/

IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do.

With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address.
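For instance, a bucket policy can also be managed from a CloudFormation template via AWS::S3::BucketPolicy; a minimal sketch (MyBucket and the CIDR are hypothetical):
#+BEGIN_EXAMPLE
MyBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref MyBucket
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        # deny object access from any source IP outside the allowed range
        - Effect: Deny
          Principal: "*"
          Action: s3:*
          Resource: !Sub "arn:aws:s3:::${MyBucket}/*"
          Condition:
            NotIpAddress:
              aws:SourceIp: 203.0.113.0/24
#+END_EXAMPLE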

With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.

With query string authentication, customers can create a URL to an Amazon S3 object which is only valid for a limited time.

By default, all Amazon S3 resources (buckets, objects, and related subresources such as lifecycle configuration and website configuration) are private: only the resource owner, the AWS account that created it, can access the resource.
*** DONE S3 data security question
CLOSED: [2015-05-09 Sat 12:11]
Which features can be used to restrict access to data in S3? (Pick 2 correct answers)
A. Create a CloudFront distribution for the bucket.
B. Set an S3 bucket policy.
C. Use S3 Virtual Hosting.
D. Set an S3 ACL on the bucket or the object.
E. Enable IAM Identity Federation.

BD
*** DONE S3 bucket ACL question
CLOSED: [2015-05-09 Sat 12:12]
You are signed in as root user on your account but there is an Amazon S3 bucket under your account that you cannot access. What is a possible reason for this?

A) The S3 bucket is full.
B) The S3 bucket has reached the maximum number of objects allowed.
C) You are in the wrong availability zone.
D) An IAM user assigned a bucket policy to an Amazon S3 bucket and didn't specify the root user as a principal.

D
With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users can access. In some cases, you might have an IAM user with full access to IAM and Amazon S3. If the IAM user assigns a bucket policy to an Amazon S3 bucket and doesn't specify the root user as a principal, the root user is denied access to that bucket. However, as the root user, you can still access the bucket by modifying the bucket policy to allow root user access.

http://docs.aws.amazon.com/IAM/latest/UserGuide/iam-troubleshooting.html#testing2
** TODO [#A] S3 server side encryption question
A client of yours has a huge amount of data stored on Amazon S3 but is concerned about someone stealing it while it is in transit. You know that all data is encrypted in transit on AWS, but which of the following is wrong when describing server-side encryption on AWS?

A. Amazon S3 encrypts each object with a unique key.
B. Amazon S3 server-side encryption employs strong multi-factor encryption.
C. In server-side encryption, you manage encryption/decryption of your data, the encryption keys, and related tools.
D. Server-side encryption is about data encryption at rest - that is, Amazon S3 encrypts your data as it writes it to disks.

C

Amazon S3 encrypts your object before saving it on disks in its data centers and decrypts it when you download the objects. You have two options, depending on how you choose to manage the encryption keys: server-side encryption and client-side encryption.

Server-side encryption is about data encryption at rest - that is, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. Amazon S3 manages encryption and decryption for you. For example, if you share your objects using a pre-signed URL, that URL works the same way for both encrypted and unencrypted objects.

In client-side encryption, you manage encryption/decryption of your data, the encryption keys, and related tools. Server-side encryption is an alternative to client-side encryption, in which Amazon S3 manages the encryption of your data, freeing you from the tasks of managing encryption and encryption keys.

Amazon S3 server-side encryption employs strong multi-factor encryption: Amazon S3 encrypts each object with a unique key and, as an additional safeguard, encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
** TODO Does S3 support PIOPS?
** TODO [#A] S3 grantee support for authenticated users: how to specify which users?
** HALF [#B] S3 differences between regions

  • URL bucket name: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
    If you create a client by specifying the US Standard region, it uses the following endpoint to communicate with Amazon S3: s3.amazonaws.com

If you create a client by specifying any other AWS region, each of these regions maps to the region-specific endpoint: s3-<region>.amazonaws.com
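A sketch of creating a bucket pinned to a non-US-Standard region (bucket name hypothetical):
#+BEGIN_EXAMPLE
# buckets outside US Standard need an explicit location constraint
aws s3api create-bucket --bucket my-example-bucket --region ap-southeast-1 \
    --create-bucket-configuration LocationConstraint=ap-southeast-1
# such a bucket is then served from the region-specific endpoint,
# e.g. my-example-bucket.s3-ap-southeast-1.amazonaws.com
#+END_EXAMPLE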

  • consistency model: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html
    US Standard: provides eventual consistency for all requests.

Other regions: provide read-after-write consistency for PUTS of new objects in your Amazon S3 bucket and eventual consistency for overwrite PUTS and DELETES.
** # --8<-------------------------- separator ------------------------>8--
** DONE [#A] S3 consistency model: the US Standard region is exceptional (eventual consistency only); others are read-after-write + eventual consistency
CLOSED: [2015-05-04 Mon 13:09]
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html

  • The US Standard region provides eventual consistency for all requests.

  • All other regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.

  • Amazon S3 does not currently support object locking.
*** question: eventual consistency for existing objects
A user is updating an object in the Singapore region. The original content has the value "colour=red". The user updates the object with the content "colour=white". If the user tries to read the value of the object 1 minute after it was uploaded, what will S3 return?

A. It will return an error saying that the object was not found
B. It may return either "colour=red" or "colour=white", i.e. either value
C. It will return "colour=white"
D. It will return "colour=red"

B
The AWS S3 Singapore region supports read-after-write consistency for PUTS of new objects in the Amazon S3 bucket and eventual consistency for overwrite PUTS and DELETES. In this case, since it is an overwrite, it may return either the old or the new object.
*** question: read-after-write for new objects
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html

A user is creating an object in the EU region. The content of the object is the value "colour=red". If the user tries to read the value of the object 1 minute after it was uploaded, what will S3 return?

It will return "colour=red" It will return error 404 object not found It will return an invalid key It may return an error or return the value "colour=red"

A
The AWS S3 EU region supports read-after-write consistency for PUTS of new objects in the Amazon S3 bucket and eventual consistency for overwrite PUTS and DELETES. In this case, since it is a new object, it will be available for the user to view.

http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html
** DONE S3 doesn't support object locking
CLOSED: [2015-05-04 Mon 13:18]
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html

  • Amazon S3 does not currently support object locking.

  • If two PUT requests are simultaneously made to the same key, the request with the latest time stamp wins. If this is an issue, you will need to build an object-locking mechanism into your application.

  • Updates are key-based; there is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application.
** DONE [#A] Feature read-after-write consistency: it applies only to new data
CLOSED: [2015-04-08 Wed 18:36]
http://shlomoswidler.com/2009/12/read-after-write-consistency-in-amazon.html

Read-after-write consistency tightens things up a bit, guaranteeing immediate visibility of new data to all clients. With read-after-write consistency, a newly created object or file or table row will immediately be visible, without any delays.

Note that read-after-write is not complete consistency: there's also read-after-update and read-after-delete.

Read-after-write consistency allows you to build distributed systems with less latency.
** DONE [#A] S3 data encryption :IMPORTANT:
CLOSED: [2015-05-04 Mon 11:40]
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

  • You can securely upload/download your data to Amazon S3 via SSL endpoints using the HTTPS protocol.

  • If you need extra security, you can use the Server-Side Encryption (SSE) option or the Server-Side Encryption with Customer-Provided Keys (SSE-C) option to encrypt data stored at rest.

  • Protect data in transit (as it travels to and from Amazon S3): SSL, client-side encryption

  • Protect data at rest: use Server-Side Encryption. http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
    You request that Amazon S3 encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
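A minimal sketch of requesting server-side encryption at upload time (bucket and file names hypothetical); the table below summarizes the key-management options:
#+BEGIN_EXAMPLE
# ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3, AES-256)
aws s3 cp backup.tar.gz s3://my-example-bucket/backup.tar.gz --sse AES256
#+END_EXAMPLE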

| Options                                      | Summary                                                                                 |
|----------------------------------------------+-----------------------------------------------------------------------------------------|
| Encrypt with Amazon S3-Managed Keys (SSE-S3) | Encrypts each object with a unique key, using strong multi-factor encryption.          |
| Encrypt with AWS KMS-Managed Keys (SSE-KMS)  | Similar to SSE-S3, but with some additional benefits                                    |
| Encrypt with Customer-Provided Keys (SSE-C)  | You manage encryption/decryption of your data, the encryption keys, and related tools. |
*** Use Client-Side Encryption
You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
** DONE [#B] Feature: Hosting a Static Website on Amazon S3
CLOSED: [2015-04-06 Mon 16:31]
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html
*** bucket ACL
#+BEGIN_EXAMPLE
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::denny-s3-bucket/*"
    },
    {
      "Sid": "AWSCloudTrailAclCheck20131101",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::903692715234:root",
          "arn:aws:iam::035351147821:root",
          "arn:aws:iam::859597730677:root",
          "arn:aws:iam::814480443879:root",
          "arn:aws:iam::216624486486:root",
          "arn:aws:iam::086441151436:root",
          "arn:aws:iam::388731089494:root",
          "arn:aws:iam::284668455005:root",
          "arn:aws:iam::113285607260:root"
        ]
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::denny-s3-bucket"
    },
    {
      "Sid": "AWSCloudTrailWrite20131101",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::903692715234:root",
          "arn:aws:iam::035351147821:root",
          "arn:aws:iam::859597730677:root",
          "arn:aws:iam::814480443879:root",
          "arn:aws:iam::216624486486:root",
          "arn:aws:iam::086441151436:root",
          "arn:aws:iam::388731089494:root",
          "arn:aws:iam::284668455005:root",
          "arn:aws:iam::113285607260:root"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::denny-s3-bucket/AWSLogs/309775215851/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
    }
  ]
}
#+END_EXAMPLE
*** upload index.html and error.html to S3 bucket

This is the error page.
*** Configure Bucket: Enable website hosting
*** visit http://example-bucket.s3-website-region.amazonaws.com
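Putting the steps above together, a minimal sketch with the aws CLI (reusing the document's bucket name):
#+BEGIN_EXAMPLE
# upload the two pages and make them publicly readable
aws s3 cp index.html s3://denny-s3-bucket/ --acl public-read
aws s3 cp error.html s3://denny-s3-bucket/ --acl public-read
# turn on static website hosting for the bucket
aws s3 website s3://denny-s3-bucket --index-document index.html --error-document error.html
#+END_EXAMPLE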

curl http://denny-s3-bucket.s3-website-ap-southeast-1.amazonaws.com/index.html

curl -I http://denny-s3-bucket.s3-website-ap-southeast-1.amazonaws.com/AWSLogs/309775215851/CloudTrail/ap-southeast-1/2015/04/02/309775215851_CloudTrail_ap-southeast-1_20150402T1720Z_XgJSsWxhxj3sHMC5.json.gz
** DONE [#A] Feature: S3 Object Lifecycle Management :IMPORTANT:
CLOSED: [2015-04-15 Wed 13:12]
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Many objects that you store in an Amazon S3 bucket might have a well-defined lifecycle, for example:

  • If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them.

  • Some documents are frequently accessed for a limited period of time. After that, you might not need real-time access to these objects, but your organization might require you to archive them for a longer period and then optionally delete them later. Some types of data that you might upload to Amazon S3 primarily for archival purposes include digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.

You can add lifecycle configuration to nonversioned buckets and versioning-enabled buckets.

  • The Transition and Expiration lifecycle actions enable you to manage the lifecycle of current versions of your objects.

  • The NonCurrentVersionTransition and NonCurrentVersionExpiration actions enable you to manage the lifecycle of noncurrent (previous) versions of your objects.
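A minimal lifecycle-configuration sketch with a current awscli (bucket name, prefix, and day counts are hypothetical): archive logs to Glacier after 30 days and expire them after a year.
#+BEGIN_EXAMPLE
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-then-expire-logs",
    "Filter": { "Prefix": "logs/" },
    "Status": "Enabled",
    "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
    "Expiration": { "Days": 365 }
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket \
    --lifecycle-configuration file://lifecycle.json
#+END_EXAMPLE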

You cannot access the archived objects through the Amazon Glacier console or API; they remain accessible only through Amazon S3.
** DONE [#A] How do I know if I lose an RRS object: S3 returns a 405 error
CLOSED: [2015-04-08 Wed 21:39]
http://aws.amazon.com/s3/faqs/
If an RRS object has been lost, Amazon S3 will return a 405 error on requests made to that object.

  • Amazon S3 also offers notifications for Reduced Redundancy Storage (RRS) object loss.
*** DONE RRS lost data response: returns 405, not 404
CLOSED: [2015-05-04 Mon 11:41]
A user has stored an object in RRS. The object is lost due to an internal AWS failure. What will AWS return when someone queries the object?

A. 404 Object Not Found error
B. AWS will serve the object from backup
C. The object cannot be lost, as RRS is highly durable
D. 405 Method Not Allowed error

D

If an object in reduced redundancy storage has been lost, Amazon S3 will return a 405 error on requests made to that object. http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRRS.html
** DONE [#A] How Amazon S3 Authorizes a Request for a Bucket Operation :IMPORTANT:
CLOSED: [2015-05-04 Mon 12:02]
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-auth-workflow-bucket-operation.html
[[file:/Users/mac/Dropbox/private_data/emacs_stuff/images/AccessControlAuthorizationFlowBucketResource.png]]

http://docs.aws.amazon.com/AmazonS3/latest/dev/how-s3-evaluates-access-control.html
When Amazon S3 receives a request (for example, a bucket or an object operation), it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, user policies, and resource-based policies (bucket policy, bucket ACL, object ACL) when deciding whether to authorize the request.

For example:

  • If the requester is an IAM user, Amazon S3 must determine whether the parent AWS account to which the user belongs has granted the user the necessary permission to perform the operation. In addition, if the request is for a bucket operation, such as a request to list the bucket content, Amazon S3 must verify that the bucket owner has granted permission for the requester to perform the operation.

    To perform a specific operation on a resource, an IAM user needs permission from both the parent AWS account to which it belongs and the AWS account that owns the resource.

  • If the request is for an operation on an object that the bucket owner does not own, Amazon S3 must also check the permissions the object owner has granted (the object ACL), in addition to those of the bucket owner.

In order to determine whether the requester has permission to perform the specific operation, Amazon S3 does the following, in order, when it receives a request:

  1. Converts all the relevant access policies (user policy, bucket policy, ACLs) at run time into a set of policies for evaluation.

  2. Evaluates the resulting set of policies in the following steps. In each step, Amazon S3 evaluates a subset of policies in a specific context, based on the context authority.

    User context, Bucket context, and Object context
** DONE [#A] Key Differences Between the Amazon Website and the REST API Endpoint
CLOSED: [2015-05-03 Sun 23:07]
When you configure a bucket for website hosting, the website is available via the region-specific website endpoint.

http://example-bucket.s3-website-us-east-1.amazonaws.com/

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html

  • Website endpoints don't support HTTPS

| Key Difference                                  | REST API Endpoint                                | Website Endpoint                                                           |
|-------------------------------------------------+--------------------------------------------------+----------------------------------------------------------------------------|
| Access control                                  | Supports both public and private content.        | Supports only publicly readable content.                                   |
| Error message handling                          | Returns an XML-formatted error response.         | Returns an HTML document.                                                  |
| Redirection support                             | Not applicable                                   | Supports both object-level and bucket-level redirects.                     |
| Requests supported                              | Supports all bucket and object operations        | Supports only GET and HEAD requests on objects.                            |
| Responses to GET/HEAD requests at bucket's root | Returns a list of the object keys in the bucket. | Returns the index document that is specified in the website configuration. |
| Secure Sockets Layer (SSL) support              | Supports SSL connections.                        | Does not support SSL connections.                                          |

A user has created a bucket and is trying to access the object using the public URL of the object. Which of the below mentioned statements is false for accessing the object using the REST API endpoint?

A. It supports the SSL connection
B. It returns the response in an XML format
C. It supports the redirect request
D. It supports all the object and bucket functions with REST

C

There is a difference between the S3 REST API endpoint and the S3 website-hosting-enabled endpoint: the REST API endpoint does not support redirect requests.

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
** # --8<-------------------------- separator ------------------------>8--
** Change object storage class in S3
http://docs.aws.amazon.com/AmazonS3/latest/dev/ChgStoClsOfObj.html

  • It is not possible to change the storage class of a specific version of an object in place. When you copy it, Amazon S3 gives the copy a new version ID.
** Feature: RRS: Reduced Redundancy Storage
http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRRS.html
http://s3browser.com/working-with-reduced-redundancy-storage-rrs.php

Amazon S3 standard storage is designed to provide 99.999999999% durability and to sustain the concurrent loss of data in two facilities, while RRS is designed to provide 99.99% durability and to sustain the loss of data in a single facility.

If an object in reduced redundancy storage has been lost, Amazon S3 will return a 405 error on requests made to that object.

Amazon S3 also offers notifications for reduced redundancy storage object loss: you can configure your bucket so that when Amazon S3 detects the loss of an RRS object, it sends a notification.
** S3 Common Use Scenarios
https://docs.aws.amazon.com/AmazonS3/latest/gsg/S3-gsg-CommonUseScenarios.html
The AWS Solutions web page lists many of the ways you can use Amazon S3. The following list summarizes some of those ways.

  • Backup and Storage - Provide data backup and storage services for others.
  • Application Hosting - Provide services that deploy, install, and manage web applications.
  • Media Hosting - Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
  • Software Delivery - Host your software applications that customers can download.
** # --8<-------------------------- separator ------------------------>8--
** basic use
#+BEGIN_EXAMPLE

Objects are stored in buckets, and each bucket lives in a specific Region.

An object's replicas are stored within the same region.

Amazon's rule of thumb: if more than 5% of requests fail to return within the predefined time window, the system is considered faulty and needs operator attention.

In practice, modifications to S3 data generally reach eventual consistency within about 2 seconds.

S3 provides 99.999999999% durability and 99.99% availability.

S3 supports versioning for storing multiple versions of an object.

The idea behind RRS (Reduced Redundancy Storage): the more important the data, the more replicas are stored.
#+END_EXAMPLE
** useful link
http://aws.amazon.com/s3/

Amazon Simple Storage Service (Amazon S3)

http://code.google.com/p/s3ql/wiki/implementation_details

S3QL Implementation Details
** [#A] Todo: open questions :IMPORTANT:
*** S3 isolates users via buckets. When S3 places the multiple copies, is placement chosen per bucket or per object?
My guess is per bucket. If so, when one disk fails, all data of the users whose buckets map to that disk faces the same risk.
*** What are the concrete use cases for query string authentication? Can it be done at the access layer, so the IaaS layer doesn't need to implement it yet?
*** Why haven't NoSQL databases borrowed the bucket approach of storing different groups' data separately, instead of putting everything in one database? When the database fails, a large number of users' data is affected.
*** Amazon S3 performs PUT and COPY operations synchronously - is that true? http://aws.amazon.com/s3/
** Use cases
http://aws.amazon.com/s3/

  • Content Storage and Distribution

  • Storage for Data Analysis

  • Backup, Archiving and Disaster Recovery
** Shanda Cloud Storage -- similar to Amazon S3 :noexport:
*** Node/disk failures are confirmed manually
*** Once a disk failure is confirmed, an extra replica of the data can be added within 5 minutes via multi-write
*** [#A] Taobao's fastfs file system: when converting storage objects from external IDs to internal IDs, it takes a third path (neither a data routing table nor consistent hashing)
Advantage: synchronous backup via three identical mirrors sidesteps a hard problem: keeping a data routing table highly available.

Drawback: its strength is also its weakness. It only removes the single point of failure for the data; it does not virtualize the storage. During disk maintenance, the only option is a disk-to-disk copy, which takes a very long time.
*** Copying 1 TB between two disks takes about half a day: disk I/O tops out around 50 MB/s, so during disk-failure recovery the data cannot be restored onto a new disk quickly
** # --8<-------------------------- separator ------------------------>8--
** DONE S3 data security
CLOSED: [2015-04-08 Wed 17:18]
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_sysops_associate_examsample.pdf
Which features can be used to restrict access to data in S3? (Pick 2 correct answers)
A. Create a CloudFront distribution for the bucket.
B. Set an S3 bucket policy.
C. Use S3 Virtual Hosting.
D. Set an S3 ACL on the bucket or the object.
E. Enable IAM Identity Federation.

B and D
** DONE What data consistency model does Amazon S3 employ?
CLOSED: [2015-04-08 Wed 18:35]
http://aws.amazon.com/s3/faqs/

Amazon S3 buckets in the US Standard region provide eventual consistency. Amazon S3 buckets in all other regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
** DONE What options do I have for encrypting data stored on Amazon S3?
CLOSED: [2015-04-08 Wed 21:20]
Four ways:
| Name             | Summary                                                                                    |
|------------------+--------------------------------------------------------------------------------------------|
| SSE-S3           | Amazon handles key management and key protection                                           |
| SSE-C            | users manage the keys themselves                                                           |
| SSE-KMS          | enables users to use AWS Key Management Service (AWS KMS) to manage their encryption keys  |
| a client library | such as the Amazon S3 Encryption Client                                                    |
** DONE Feature: Amazon S3 Cross-Region Replication (CRR)
CLOSED: [2015-04-08 Wed 21:38]
http://aws.amazon.com/s3/faqs/
CRR is an Amazon S3 feature that automatically replicates data across AWS regions.

With CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region that you choose.

You can use CRR to provide lower-latency data access in different geographic regions.
** DONE S3 automated archival using the lower-cost Amazon Glacier: the Singapore Region currently doesn't support it
CLOSED: [2015-04-09 Thu 09:41]
http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
** DONE [#B] Why use "multipart upload" with S3
CLOSED: [2015-04-21 Tue 10:51]
http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Using multipart upload provides the following advantages:

With a single PUT operation you can upload objects up to 5 GB in size. Using the Multipart upload API you can upload large objects, up to 5 TB.
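The high-level =aws s3 cp= switches to multipart automatically for large files; as a sketch, the low-level flow it wraps looks like this (names and ids hypothetical), and the advantages listed below apply to either path:
#+BEGIN_EXAMPLE
# start the upload; the response contains an UploadId
aws s3api create-multipart-upload --bucket my-example-bucket --key big.iso
# upload each chunk with its part number (parts can go in parallel)
aws s3api upload-part --bucket my-example-bucket --key big.iso \
    --part-number 1 --body big.iso.part1 --upload-id "$UPLOAD_ID"
# after all parts are in, complete with the list of part numbers and ETags
aws s3api complete-multipart-upload --bucket my-example-bucket --key big.iso \
    --upload-id "$UPLOAD_ID" --multipart-upload file://parts.json
#+END_EXAMPLE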

  • Improved throughput - you can upload parts in parallel to improve throughput.
  • Quick recovery from network issues - a smaller part size minimizes the impact of restarting a failed upload due to a network error.
  • Pause and resume object uploads - you can upload object parts over time. Once you initiate a multipart upload, there is no expiry; you must explicitly complete or abort the multipart upload.
  • Begin an upload before you know the final object size - you can upload an object as you are creating it.
** DONE S3 and file system question
CLOSED: [2015-04-29 Wed 18:07]
#+BEGIN_EXAMPLE
Object storage systems require less _____ than file systems to store and access files.
Big data
Metadata
Master data
Exif data

Metadata
Object storage systems are typically more efficient because they reduce the overhead of managing file metadata by storing the metadata with the object. This means object storage can be scaled out almost endlessly by adding nodes.
#+END_EXAMPLE
** DONE [#B] GET Bucket (List Objects): returns some or all (up to 1000) of the objects in a bucket
CLOSED: [2015-04-30 Thu 15:02]
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
https://cloudnative.io/blog/2015/01/aws-s3-performance-tuning/
One thing to know about the LIST operation is that it is expensive and heavy.

Consequently, frequent LIST requests are not recommended.
** # --8<-------------------------- separator ------------------------>8--
** DONE S3 data security question
CLOSED: [2015-05-04 Mon 13:22]
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_examsample.pdf
To protect S3 data from both accidental deletion and accidental overwriting, you should:
A. enable S3 versioning on the bucket
B. access S3 data using only signed URLs
C. disable S3 delete using an IAM bucket policy
D. enable S3 Reduced Redundancy Storage
E. enable Multi-Factor Authentication (MFA) protected access

A
** DONE S3 BitTorrent: objects should be less than 5 GB in size
CLOSED: [2015-05-05 Tue 10:21]
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3Torrent.html
A friend wants you to set up a small BitTorrent storage area for him on Amazon S3. You tell him it is highly unlikely that AWS would allow such a thing in their infrastructure. However, you decide to investigate. Which of the following statements best describes using BitTorrent with Amazon S3?

A. You can use the BitTorrent protocol, but only for objects that are less than 100 GB in size.
B. Amazon S3 does not support the BitTorrent protocol because it is used for pirated software.
C. You can use the BitTorrent protocol, but only for objects that are less than 5 GB in size.
D. You can use the BitTorrent protocol, but you need to ask AWS for specific permissions first.

C
** HALF S3 question
AWS Certified Solutions Architect Associate Practice Exam

A startup company hired you to help them build a mobile application, that will ultimately store billions of images and videos in Amazon Simple Storage Service (S3). The company is lean on funding, and wants to minimize operational costs, however, they have an aggressive marketing plan, and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases in traffic to and from S3, and need to ensure that it can handle the performance needs of their application.

 What other information must you gather from this customer in order to determine whether S3 is the right option?

A. You must know how many customers the company has today, because this is critical in understanding what their customer base will be in two years.
B. You must find out the total number of requests per second at peak usage.
C. You must know the size of the individual objects being written to S3, in order to properly design the key namespace.
D. In order to build the key namespace correctly, you must understand the total amount of storage needs for each S3 bucket.

B?
** DONE It is not possible to change the storage class of a specific version of an object
CLOSED: [2015-05-09 Sat 12:07]
A user has enabled versioning on a bucket. Is it possible to store all the older versions of an object in RRS and the current version in standard S3 storage?

A. Yes
B. No, but it is possible that the current version stays in RRS and the older versions in S3
C. Yes, only if a lifecycle rule is set
D. No

D
It is not possible to change the storage class of a specific version of an object in Amazon S3. When the user copies it, Amazon S3 gives the copy a new version ID.
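A sketch of the copy-based workaround (names hypothetical): copy the object over itself with a new storage class.
#+BEGIN_EXAMPLE
# the copy gets the new storage class (and, on a versioned bucket, a new version ID)
aws s3 cp s3://my-example-bucket/report.pdf s3://my-example-bucket/report.pdf \
    --storage-class REDUCED_REDUNDANCY
#+END_EXAMPLE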

http://docs.aws.amazon.com/AmazonS3/latest/dev/ChgStoClsOfObj.html
** HALF S3 IAM verification question
An IAM user is trying to perform an action on an object belonging to some other root account's bucket. Which of the below mentioned options will AWS S3 not verify?

A. Permission provided by the parent of the IAM user on the bucket
B. Permission provided by the bucket owner to the IAM user
C. The object owner has provided access to the IAM user
D. Permission provided by the parent of the IAM user

A

If an IAM user is trying to perform an action on an object belonging to another AWS user's bucket, S3 verifies whether the owner of the IAM user has given him sufficient permission. It also verifies the bucket policy as well as the policy defined by the object owner. http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-auth-workflow-object-operation.html
** HALF Can Amazon S3 uploads resume on failure?
Can Amazon S3 uploads resume on failure, or do they need to restart?
A. Restart from beginning
B. You can resume them, if you flag the "resume on failure" option before uploading.
C. Resume on failure
D. Depends on the file size
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
** TODO Does S3 have an API to get multiple objects?
** DONE Amazon S3 bucket names are globally unique, regardless of the AWS region in which you create the bucket
CLOSED: [2015-05-09 Sat 13:12]
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
** DONE [#A] How Amazon S3 Authorizes a Request: User context, Bucket context, Object context
CLOSED: [2015-05-09 Sat 13:24]
http://docs.aws.amazon.com/AmazonS3/latest/dev/how-s3-evaluates-access-control.html
| Name           | Summary                                                                                                 |
|----------------+---------------------------------------------------------------------------------------------------------|
| User context   | In the user context, the parent account to which the user belongs is the context authority.            |
| Bucket context | In the bucket context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket.     |
| Object context | If the request is for an object, Amazon S3 evaluates the subset of policies owned by the object owner. |
** TODO [#A] S3 ACL question :IMPORTANT:
A root AWS account owner is trying to understand the various options for setting permissions in AWS S3. Which of the below mentioned options is not a valid way to grant permission for S3?

A. S3 ACL
B. S3 Object Access Policy
C. User Access Policy
D. S3 Bucket Access Policy

B

Amazon S3 provides a set of operations to work with Amazon S3 resources. Managing S3 resource access means granting others permissions to work with S3. There are three ways the root account owner can define access to S3:
  • S3 ACL: the user can use ACLs to grant basic read/write permissions to other AWS accounts.
  • S3 Bucket Policy: the policy is used to grant other AWS accounts or IAM users permissions for the bucket and the objects in it.
  • User Access Policy: define an IAM user and assign him an IAM policy which grants him access to S3.
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html
** TODO [#A] Is there an object policy in S3?
** S3 ACL and policy differences

  • Generally, if a user defines an ACL on the bucket, do the objects in the bucket inherit it, and vice versa?
  • Does a policy defined at the bucket level apply to the objects in it?
  • Is it true that we cannot define an object policy on the object's own page?
** TODO S3 bucket ACL
A root account owner has created an S3 bucket, testmycloud. The account owner wants to allow everyone to upload objects, while enforcing that the person who uploaded an object manages the permissions of that object. Which is the easiest way to achieve this?

A. The root account owner should create a bucket policy which allows the other account owners to set the object policy of that bucket
B. The root account owner should create a bucket policy which allows the IAM users to upload the object
C. The root account should create the IAM users and provide them the permission to upload content to the bucket
D. The root account should use an ACL on the bucket to allow everyone to upload objects

D

Each AWS S3 bucket and object has an ACL (Access Control List) associated with it. An ACL is a list of grants identifying the grantee and the permission granted. The user can use ACLs to grant basic read/write permissions to other AWS accounts. ACLs use an Amazon S3-specific XML schema. The user cannot grant permissions to other users in his own account. ACLs are suitable for specific scenarios. For example, if a bucket owner allows other AWS accounts to upload objects, permissions to these objects can only be managed using the object ACL by the AWS account that owns the object. http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_ACLs_UsingACLs.html
** TODO S3 bucket policy to manage access at the bucket level, instead of per object
Amazon S3 allows you to set per-file permissions to grant read and/or write access. However, you have decided that you want an entire bucket with 100 files already in it to be accessible to the public. You don't want to go through 100 files individually and set permissions. What would be the best way to do this?

A. Move the files to a new bucket
B. Move the bucket to a new region
C. Use Amazon EBS instead of S3
D. Add a bucket policy to the bucket

D
Amazon S3 supports several mechanisms that give you flexibility to control who can access your data, as well as how, when, and where they can access it. Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication. IAM enables organizations to create and manage multiple users under a single AWS account. With IAM policies, you can grant IAM users fine-grained control to your Amazon S3 bucket or objects. You can use ACLs to selectively add (grant) certain permissions on individual objects. Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket. With query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a specified period of time.

http://aws.amazon.com/s3/details/#security
** TODO [#A] S3 folders can have only an ACL and cannot have a policy
A system admin is managing buckets, objects, and folders with AWS S3. Which of the below mentioned statements is true and should be taken into consideration by the sysadmin?

A. Both the object and bucket can have an ACL, but folders cannot have an ACL
B. Folders support only an ACL
C. Folders can have a policy
D. Both the object and bucket can have an access policy, but a folder cannot have a policy

B
A sysadmin can grant permission on S3 objects or buckets to any user, or make objects public, using the bucket policy and user policy. Both use the JSON-based access policy language. Generally, if a user defines an ACL on the bucket, the objects in the bucket do not inherit it, and vice versa. A bucket policy can be defined at the bucket level, which allows the objects as well as the bucket to be made public with a single policy applied to that bucket; it cannot be applied at the object level. Folders are similar to objects with no content. Thus, folders can have only an ACL and cannot have a policy. http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html

** DONE [#A] Sync Git repo with S3 bucket
CLOSED: [2017-10-16 Mon 09:18]
https://kramerc.com/2013/10/23/deploying-to-s3-upon-git-push/
*** hello world :noexport:
#+BEGIN_EXAMPLE
HTTP Proxy server name:

New settings: Access Key: AKIAIJQNW2DJBW76C6DA Secret Key: 7fVxGRxlyTEqBtnqZdocwHQRfhvh6Am7S93G5ihY Default Region: US S3 Endpoint: denny-blog-images.s3-website-us-east-1.amazonaws.com DNS-style bucket+hostname:port template for accessing a bucket: denny-blog-images.s3.amazonaws.com Encryption password: XXX Path to GPG program: /opt/local/bin/gpg Use HTTPS protocol: True HTTP Proxy server name: HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
#+END_EXAMPLE
*** [#A] S3cmd usage: http://s3tools.org/usage
denny-blog-images.s3.amazonaws.com

s3cmd --configure s3://denny-blog-images --region=us-east-1 --access_key=AKIAIJQNW2DJBW76C6DA --no-encrypt

s3cmd -P sync . s3://denny-blog-images

s3cmd --exclude=".git/*" sync . s3://denny-blog-images/

s3cmd put test.html s3://denny-blog-images/
*** sync folder via checksum
https://stackoverflow.com/questions/21891045/exclude-folders-for-s3cmd-sync
s3cmd --exclude=".git/" sync . s3://denny-blog-images/
*** upload file
s3cmd put test.html s3://denny-blog-images/
*** s3cmd put: ignore files
https://stackoverflow.com/questions/21891045/exclude-folders-for-s3cmd-sync
s3cmd --exclude="//*" sync local/ s3://s3bucket
** DONE [#A] AWS CLI: S3 management
CLOSED: [2017-03-22 Wed 17:41]
http://www.fizerkhan.com/blog/posts/Restrict-user-access-to-Single-S3-Bucket-using-Amazon-IAM.html

  • [#A] Amazon RDS: relational database in the cloud. :noexport:
Amazon RDS: easily set up, operate, and scale a relational database in the cloud.

http://aws.amazon.com/rds/

Amazon RDS automatically:

  • patches the database software
  • backs up your database
  • stores the backups for a user-defined retention period, enabling point-in-time recovery

Limitations:
| Name                  | Comment                                                  |
|-----------------------+----------------------------------------------------------|
| Multi-AZ availability | 99.95% monthly uptime SLA                                |
| read replica count    | For MySQL and PostgreSQL, up to 5 Read Replicas          |
| DB storage capacity   | Select from 5 GB to 3 TB of associated storage capacity. |

Charge factors:
| Name                   | Summary                                                                  |
|------------------------+--------------------------------------------------------------------------|
| Instance class         |                                                                          |
| Running time           |                                                                          |
| Storage                |                                                                          |
| I/O requests per month |                                                                          |
| Backup storage         | up to 100% of your provisioned database storage at no additional charge |
| Data transfer          | Internet data transfer in and out of your DB instance                   |

RDS Events: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
| Source Type        | Summary |
|--------------------+---------|
| DB instance        |         |
| DB security group  |         |
| DB snapshot        |         |
| DB parameter group |         |
** TODO [#A] RDS failover for Multi AZ: how fast does the DNS change take effect?
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

  • Failover times are typically 60-120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console UI to reflect the new Availability Zone.

Due to how the Java DNS caching mechanism works, you may need to reconfigure your JVM environment (set a short DNS cache TTL) so that clients re-resolve the endpoint after a failover.
** TODO [#A] Difference between RDS backup and snapshot
http://stackoverflow.com/questions/5249842/how-does-amazon-rds-backup-snapshot-actually-work

  • Amazon RDS uses EBS as the backing store for its RDS databases.

  • Typically, a MySQL backup, in contrast to a snapshot, involves using a tool like mysqldump to create a file of SQL statements that will then reproduce the database. The database does not need to be frozen to do this. With an EBS backend, the best practice is to freeze the database (pause all transactions) while you are snapshotting to avoid data corruption. ** TODO [#A] When RDS take snapshot, how long the downtime would be? http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

  • Creating this backup on a Single-AZ DB instance results in a brief I/O suspension, typically lasting no more than a few minutes.

  • Multi-AZ DB instances are not affected by this I/O suspension, since the backup is taken on the standby.
** TODO [#A] Why does changing the backup retention period result in a DB reboot?
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Security
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

An outage will occur if you change the backup retention period from 0 to a non-zero value or from a non-zero value to 0.

A DB instance reboot occurs immediately when one of the following occurs:

  • You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0 and set Apply Immediately to true.
  • You change the DB instance class, and Apply Immediately is set to true.
  • You change the storage type from standard to PIOPS, and Apply Immediately is set to true.
** TODO [#B] RDS: with the Multi AZ feature, the user does not have the option to take a snapshot from the replica
A user is using a small MySQL RDS DB. The user is experiencing high latency due to the Multi AZ feature. Which of the below mentioned options may not help the user in this situation?

A. Use a larger instance size
B. Schedule the automated backup in non-working hours
C. Take a snapshot from the standby replica
D. Use PIOPS

C
An RDS DB instance with Multi AZ deployments enabled may experience increased write and commit latency compared to a Single AZ deployment, due to synchronous data replication. The user may also face changes in latency if the deployment fails over to the standby replica. For production workloads, AWS recommends provisioned IOPS and DB instance classes (m1.large and larger), as they are optimized for provisioned IOPS to give fast, consistent performance. With the Multi AZ feature, the user does not have the option to take a snapshot from the replica.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
** TODO [#A] Why does Amazon RDS need 3 types of security groups?
DB security groups, VPC security groups, and EC2 security groups. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html

  • DB security group controls access to a DB instance that is not in a VPC
  • VPC security group controls access to a DB instance inside a VPC
  • Amazon EC2 security group controls access to an EC2 instance and can be used with a DB instance
*** question
You have just built an Amazon Relational Database Service (RDS) database, and you now need to set up a high level of security for it, as it holds very confidential information; you need to find out all the possible options that are available for security. What security groups does Amazon RDS use?

A. VPC security groups only
B. DB security groups and EC2 security groups only
C. VPC security groups and EC2 security groups only
D. DB security groups, VPC security groups, and EC2 security groups

D
A security group controls access to a DB instance. It does so by allowing access to the IP address ranges or Amazon EC2 instances that you specify. Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance inside a VPC, and an Amazon EC2 security group controls access to an EC2 instance and can be used with a DB instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
** # --8<-------------------------- separator ------------------------>8--
** DONE [#A] For RDS Multi AZ, there can be only one standby replica
CLOSED: [2015-05-03 Sun 18:22]
Select Yes to have Amazon RDS maintain a synchronous standby replica in a different Availability Zone than the DB instance. Amazon RDS will automatically fail over to the standby in the case of a planned or unplanned outage of the primary.
** DONE [#A] Amazon RDS Multi-AZ Deployments
CLOSED: [2015-04-16 Thu 13:33]
http://aws.amazon.com/rds/details/multi-az/

  • When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

  • In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.

  • In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover.

  • DB instances using Multi-AZ deployments may have increased write and commit latency compared to a Single-AZ deployment, due to the synchronous data replication that occurs.

  • Multi-AZ deployments for Oracle, PostgreSQL, and MySQL DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.
** DONE RDS Multi AZ for different DB types
CLOSED: [2015-05-03 Sun 13:33]
You are running PostgreSQL on Amazon RDS and it seems to be running smoothly, deployed in one availability zone. A database administrator asks you if DB instances running PostgreSQL support Multi-AZ deployments. What would be a correct response to this question?

A. Yes, but only for small DB instances.
B. Yes, but you need to request the service from AWS.
C. Yes.
D. No.

C

Amazon RDS supports DB instances running several versions of PostgreSQL; currently versions 9.3.1, 9.3.2, and 9.3.3 are supported. You can create DB instances and DB snapshots, point-in-time restores and backups. DB instances running PostgreSQL support Multi-AZ deployments and Provisioned IOPS, and can be created inside a VPC. You can also use SSL to connect to a DB instance running PostgreSQL. You can use any standard SQL client application to run commands for the instance from your client computer. Such applications include pgAdmin, a popular open source administration and development tool for PostgreSQL, or psql, a command line utility that is part of a PostgreSQL installation. In order to deliver a managed service experience, Amazon RDS does not provide host access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS supports access to databases on a DB instance using any standard SQL client application; it does not allow direct host access to a DB instance via Telnet or Secure Shell (SSH). http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html
** DONE [#A] Amazon RDS provides backup storage up to 100% of the total provisioned database storage at no additional charge
CLOSED: [2015-05-04 Mon 13:52]
A user has launched RDS with an Oracle DB. The instance size is 20 GB. The user has taken 2 snapshots of that DB. Will RDS charge the user for the snapshots?

A. No, provided the total snapshot size is less than 20 GB
B. No, backup storage is always free
C. No, provided the snapshot storage is less than 40 GB
D. Yes

A
RDS backup storage is the storage associated with automated database backups and any active database snapshots that the user has taken. Amazon RDS provides backup storage up to 100% of the total provisioned database storage at no additional charge. In this case, RDS allows free snapshot / automated backup storage of up to 20 GB.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
** DONE [#A] The Multi AZ high-availability feature is not a scaling solution for read-only scenarios
CLOSED: [2015-05-03 Sun 17:49]
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To serve read-only traffic, you should use a Read Replica.
** DONE [#A] During an RDS snapshot, I/O operations will be suspended for a few minutes
CLOSED: [2015-04-22 Wed 10:41]
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
What happens to the I/O operations while you take a database snapshot?
A. I/O operations to the database are suspended for a few minutes while the backup is in progress.
B. I/O operations to the database are sent to a replica (if available) for a few minutes while the backup is in progress.
C. I/O operations will function normally.
D. I/O operations to the database are suspended for an hour while the backup is in progress.

A

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html

During the backup window, storage I/O may be suspended while your data is being backed up, and you may experience elevated latency.
** DONE RDS standby replica vs read-only replica
CLOSED: [2015-05-03 Sun 18:10]
A user has enabled the Multi AZ feature with an MS SQL RDS database server. Which of the below mentioned statements will help the user understand the Multi AZ feature better?

A. In Multi AZ, AWS runs two DBs in parallel and copies the data synchronously to the replica copy
B. In Multi AZ, AWS runs just one DB but copies the data synchronously to the standby replica
C. AWS MS SQL does not support the Multi AZ feature
D. In Multi AZ, AWS runs two DBs in parallel and copies the data asynchronously to the replica copy

B
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Note that the high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To serve read-only traffic, you should use a read replica. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
** DONE questions about RDS failover for Multi AZ
CLOSED: [2015-05-03 Sun 18:06]
A user is accessing RDS from an application. The user has enabled the Multi AZ feature with an MS SQL RDS DB. During a planned outage, how will AWS ensure that the switch from the DB to a standby replica will not affect access to the application?

A. RDS will have both DBs running independently, and the user has to switch over manually
B. RDS uses DNS to switch over to the standby replica for a seamless transition
C. The switchover changes hardware, so RDS does not need to worry about access
D. RDS will have an internal IP which will redirect all requests to the new DB

B

In the event of a planned or unplanned outage of a DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if the user has enabled Multi AZ. The automatic failover mechanism simply changes the DNS record of the DB instance to point to the standby DB instance. As a result, the user will need to re-establish any existing connections to the DB instance. However, as the DNS is the same, the application can access DB seamlessly. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

AWS Certified SysOps Administrator Associate Practice Exam:
Which of the following is part of the failover process for a Multi-Availability Zone Amazon Relational Database Service (RDS) instance?

A. The IP of the primary DB instance is switched to the standby DB instance.
B. The DNS record for the RDS endpoint is changed from primary to standby.
C. The failed RDS DB instance reboots.
D. A new DB instance is created in the standby availability zone.

B
** DONE When RDS auto-failover will happen
CLOSED: [2015-05-03 Sun 18:07]
http://aws.amazon.com/rds/details/multi-az/
Amazon RDS automatically performs a failover in the event of any of the following:

  • Loss of availability in primary Availability Zone
  • Loss of network connectivity to primary
  • Compute unit failure on primary
  • Storage failure on primary

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html The primary DB instance switches over automatically to the standby replica if any of the following conditions occur:

  • An Availability Zone outage
  • The primary DB instance fails
  • The DB instance's server type is changed
  • The DB instance is undergoing software patching
  • A manual failover of the DB instance was initiated using Reboot with failover

There are several ways to determine if your Multi-AZ DB instance has failed over:

  • DB event subscriptions can be setup to notify you via email or SMS that a failover has been initiated. For more information about events, see Using Amazon RDS Event Notification
  • You can view your DB events via the Amazon RDS console or APIs.
  • You can view the current state of your Multi-AZ deployment via the Amazon RDS console and APIs.
** # --8<-------------------------- separator ------------------------>8--
** DONE RDS license
CLOSED: [2015-04-16 Thu 11:27]
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
What are the two types of licensing options available for using Amazon RDS for Oracle?
A. BYOL and Enterprise License
B. BYOL and License Included
C. Enterprise License and License Included
D. Role based License and License Included

Answer: B
** DONE RDS performance question
CLOSED: [2015-04-16 Thu 14:21]
AWS Certified SysOps Administrator Associate Practice Exam:
You run a two-tiered web application with the following components: an elastic load balancer (ELB), three web/application servers on Amazon Elastic Compute Cloud (EC2), and one MySQL RDS database. With growing load, database queries take longer and longer and slow down the overall response time for user requests.

Which of the following options could speed up performance? Choose 3 answers.

A. Create an RDS read-replica and redirect half of the database read requests to it.
B. Cache database queries in Amazon ElastiCache.
C. Setup RDS in multi-Availability Zone mode.
D. Shard the database and distribute load between shards.
E. Use Amazon CloudFront to cache database queries.

A, B and D
** DONE RDS read replica
CLOSED: [2015-04-16 Thu 14:25]
http://aws.amazon.com/rds/faqs/
Q: How many Read Replicas can I create for a given source DB Instance?
Amazon RDS for MySQL and PostgreSQL currently allow you to create up to five (5) Read Replicas for a given source DB Instance.

Q: Do Amazon RDS Read Replicas support synchronous replication? No. Read Replicas in Amazon RDS for MySQL and PostgreSQL are implemented using those engines' native asynchronous replication.

Q: Can I create a Read Replica of another Read Replica? Amazon RDS for MySQL: You can create a second-tier Read Replica from an existing first-tier Read Replica. By creating a second-tier Read Replica, you may be able to move some of the replication load from the master database instance to a first-tier Read Replica. Please note that a second-tier Read Replica may lag further behind the master because of additional replication latency introduced as transactions are replicated from the master to the first tier replica and then to the second-tier replica.

Amazon RDS for PostgreSQL: Read Replicas of Read Replicas are not currently supported.
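A minimal sketch of creating a read replica with the CLI (instance identifiers hypothetical):
#+BEGIN_EXAMPLE
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica-1 \
    --source-db-instance-identifier mydb
#+END_EXAMPLE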

http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
Is creating a Read Replica of another Read Replica supported?
A. Only in certain regions
B. Only with MSSQL based RDS
C. Only for Oracle RDS types
D. No
** DONE MySQL engine: only the InnoDB storage engine supports crash recovery
CLOSED: [2015-04-16 Thu 14:38]
http://aws.amazon.com/rds/faqs/

The Point-In-Time-Restore and Snapshot Restore features of Amazon RDS for MySQL require a crash-recoverable storage engine and are supported for InnoDB storage engine only.

MyISAM storage engine does not support reliable crash recovery and may result in lost or corrupt data when MySQL is restarted after a crash, preventing Point-In-Time-Restore or Snapshot restore from working as intended.

However, if you still choose to use MyISAM with Amazon RDS, following these steps may be helpful in certain scenarios for Snapshot Restore functionality.
** DONE RDS storage engine question
CLOSED: [2015-04-22 Wed 10:35]
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine.
A. InnoDB
B. MyISAM

A
** DONE RDS failover question
CLOSED: [2015-04-22 Wed 10:36]
If you have chosen Multi-AZ deployment, in the event of a planned or unplanned outage of your primary DB Instance, Amazon RDS automatically switches to the standby replica. The automatic failover mechanism simply changes the ______ record of the main DB Instance to point to the standby DB Instance.
A. DNAME
B. CNAME
C. TXT
D. MX
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html

B
** DONE RDS I/O question
CLOSED: [2015-04-22 Wed 10:37]
When should I choose Provisioned IOPS over standard RDS storage?
A. If you have batch-oriented workloads
B. If you use production online transaction processing (OLTP) workloads
C. If you have workloads that are not sensitive to consistent performance

http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
B
** DONE RDS upgrade test question
CLOSED: [2015-04-22 Wed 10:40]
Can I test my DB Instance against a new version before upgrading?
A. No
B. Yes
C. Only in VPC
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html

B
** DONE [#B] RDS backup process
CLOSED: [2015-04-23 Thu 16:53]
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html

A brief I/O freeze, typically lasting a few seconds, occurs during both automated backups and DB snapshot operations on Single-AZ DB instances.

During the backup window, storage I/O may be suspended while your data is being backed up and you may experience elevated latency.

This period of I/O suspension is shorter for Multi-AZ DB deployments, since the backup is taken from the standby, but latency can occur during the backup process.

Changing the backup retention period to 0 turns off automatic backups for the DB instance and deletes all existing automated backups for the instance.
** DONE RDS events question
CLOSED: [2015-05-03 Sun 18:31]
A user is planning to set up notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for the snapshot source type?

Backup
Deletion
Restoration
Creation

A

Amazon RDS uses the Amazon Simple Notification Service to provide a notification when an Amazon RDS event occurs. Event categories for a snapshot source type include: Creation, Deletion, and Restoration. Backup is part of the DB instance source type.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
** DONE RDS events notification
CLOSED: [2015-05-03 Sun 18:31]
A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?

Configure event notification on the DB security group
Configure SNS to monitor security group changes
It is not possible to get the notifications on a change in the security group
Configure the CloudWatch alarm on the DB for a change in the security group

A

Amazon RDS uses the Amazon Simple Notification Service to provide a notification when an Amazon RDS event occurs. These events can be configured for source categories, such as DB instance, DB security group, DB snapshot and DB parameter group. If the user is subscribed to a Configuration Change category for a DB security group, he will be notified when the DB security group is changed.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
** DONE RDS: changes to the backup window take effect immediately
CLOSED: [2015-05-06 Wed 11:39]
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html
Changes to the backup window take effect ______.
A from the next billing cycle
B after 30 minutes
C immediately
D after 24 hours
http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html
** DONE RDS performance question
CLOSED: [2015-05-09 Sat 11:12]
A user is using a small MySQL RDS DB. The user is experiencing high latency due to the Multi AZ feature. Which of the below mentioned options may not help the user in this situation?

Use PIOPS
Schedule the automated backup in non-working hours
Use a larger or higher size instance
Take a snapshot from the standby replica

D

An RDS DB instance with Multi AZ deployment enabled may experience increased write and commit latency compared to a Single AZ deployment, due to synchronous data replication. The user may also see latency changes if the deployment fails over to the standby replica. For production workloads, AWS recommends using Provisioned IOPS and DB instance classes (m1.large and larger), as they are optimized for Provisioned IOPS and give fast, consistent performance. With the Multi AZ feature, the user does not have the option to take a snapshot from the standby replica.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** TODO Charge for RDS DB backup? If less than 1GB, will I be charged for 1GB?
There is no additional charge for backup storage of up to 100% of your provisioned database storage for an active DB Instance.

  • [#A] AWS VPC: Isolated Cloud Resources :noexport: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
  • Amazon VPC is the networking layer for Amazon EC2.
  • Use a public subnet for resources that must be connected to the Internet
  • Use a private subnet for resources that won't be connected to the Internet.
  • A NAT instance has an Elastic IP address and is connected to the Internet through an Internet gateway.

Limitation
| Resource                               | Default Limit |
|----------------------------------------+---------------|
| VPCs per region                        | 5             |
| Internet gateways per region           | 5             |
| Virtual private gateways per region    | 5             |
| Customer gateways per region           | 50            |
| VPN connections per region             | 50            |
|----------------------------------------+---------------|
| Subnets per VPC                        | 200           |
| Security groups per VPC                | 100           |
| Rules per security group               | 50            |
| Security groups per network interface  | 5             |
|----------------------------------------+---------------|
| Network interfaces per VPC             | 100           |
| Network ACLs per VPC                   | 200           |
| Rules per network ACL                  | 20            |

  • Subnets can't span Availability Zones

  • Elastic IP address
| Type                   | Summary                                                        |
|------------------------+----------------------------------------------------------------|
| EC2-Classic            | An EIP is disassociated from your instance when you stop it.   |
| Default/Nondefault VPC | An EIP remains associated with your instance when you stop it. |

You can modify the VPC to add more subnets or add or remove gateways at any time after the VPC has been created. The four options are:

  • VPC with a Single Public Subnet Only
  • VPC with Public and Private Subnets
  • VPC with Public and Private Subnets and Hardware VPN Access
  • VPC with a Private Subnet Only and Hardware VPN Access

[[file:/Users/mac/Dropbox/private_data/emacs_stuff/images/aws_vpc.png]]
** Why should I use Amazon VPC?
http://aws.amazon.com/vpc/faqs/

  • Amazon VPC enables you to build a virtual network in the AWS cloud
  • Define your own network space, and control how your network and the resources inside it are exposed to the Internet
  • Leverage the greatly enhanced security options in Amazon VPC to provide more granular access for your EC2 instances
** Benefits of Using a VPC
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html#concepts-vpc
  • Assign static private IP addresses to your instances that persist across starts and stops
  • Assign multiple IP addresses to your instances
  • Define network interfaces, and attach one or more network interfaces to your instances
  • Change security group membership for your instances while they're running
  • Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
  • Add an additional layer of access control to your instances in the form of network access control lists (ACL)
  • Run your instances on single-tenant hardware
** [#A] Security in Your VPC: SecurityGroup and NACLs :IMPORTANT:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html
Amazon VPC provides two features that you can use to increase security for your VPC:
  • Security groups: act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level
  • Network access control lists (ACLs): act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level
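A hedged CLI sketch of one rule at each layer (all IDs and CIDRs are placeholders):

#+BEGIN_EXAMPLE
# Instance level: allow inbound SSH from a hypothetical office CIDR
aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d \
    --protocol tcp --port 22 --cidr 203.0.113.0/24
# Subnet level: allow inbound HTTP on the subnet's network ACL
aws ec2 create-network-acl-entry --network-acl-id acl-5fb85d36 \
    --ingress --rule-number 100 --protocol tcp \
    --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow
#+END_EXAMPLE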

You can use AWS Identity and Access Management to control who in your organization has permission to create and manage security groups and network ACLs. For example, you can give only your network administrators that permission, but not personnel who only need to launch instances.

  • Differences between security groups and network ACLs. [[file:/Users/mac/Dropbox/private_data/emacs_stuff/images/security-diagram.png]]

| Security Group                                                       | Network ACL                                                                 |
|----------------------------------------------------------------------+------------------------------------------------------------------------------|
| Operates at the instance level (first layer of defense)              | Operates at the subnet level (second layer of defense)                      |
| Supports allow rules only                                            | Supports allow rules and deny rules                                         |
| Is stateful: return traffic is automatically allowed,                | Is stateless: return traffic must be explicitly                             |
| . regardless of any rules                                            | . allowed by rules                                                          |
| All rules are evaluated before deciding whether to allow traffic     | Rules are processed in number order when deciding whether to allow traffic  |
| Applies to an instance only if someone specifies the security group  | Automatically applies to all instances in the subnets it's                  |
| . when launching the instance, or associates the security group      | . associated with (backup layer of defense, so you don't have               |
| . with the instance later on                                         | . to rely on someone specifying the security group)                         |
** DONE [#A] Differences Between Security Groups for EC2-Classic and EC2-VPC :IMPORTANT:
CLOSED: [2015-04-15 Wed 15:55]
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

| EC2-Classic                                                      | EC2-VPC                                                                          |
|------------------------------------------------------------------+-----------------------------------------------------------------------------------|
| You can create up to 500 security groups per region.            | You can create up to 100 security groups per VPC.                                |
| You can add up to 100 rules to a security group.                | You can add up to 50 rules to a security group.                                  |
| You can add rules for inbound traffic only.                     | You can add rules for inbound and outbound traffic.                              |
| You can assign up to 500 security groups to an instance.        | You can assign up to 5 security groups to a network interface.                   |
| You can reference security groups from other AWS accounts.      | You can reference security groups for your VPC only.                             |
|------------------------------------------------------------------+-----------------------------------------------------------------------------------|
| After you launch an instance, you can't change the security     | You can change the security groups assigned to an instance after it's launched.  |
| . groups assigned to it.                                        |                                                                                  |
|------------------------------------------------------------------+-----------------------------------------------------------------------------------|
| When you add a rule to a security group, you don't have to      | When you add a rule to a security group, you must specify a protocol, and        |
| . specify a protocol, and only TCP, UDP, or ICMP are available. | . it can be any protocol with a standard protocol number, or all protocols.      |
|------------------------------------------------------------------+-----------------------------------------------------------------------------------|
| When you add a rule to a security group, you must               | When you add a rule to a security group, you can specify port numbers            |
| . specify port numbers (for TCP or UDP).                        | . only if the rule is for TCP or UDP, and you can specify all port numbers.      |
** [#A] IGW (Internet Gateways)
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

  • An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet.

An Internet gateway serves two purposes:

  • Provide a target in your VPC route tables for Internet-routable traffic
  • Perform network address translation (NAT) for instances that have been assigned public IP addresses.
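A minimal AWS CLI sketch of wiring this up (all IDs are placeholders): create the gateway, attach it, then add the default route described in the next paragraph.

#+BEGIN_EXAMPLE
# Hypothetical IDs throughout
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-1a2b3c4d --vpc-id vpc-11aa22bb
aws ec2 create-route --route-table-id rtb-9c6f44f8 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-1a2b3c4d
#+END_EXAMPLE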

To use an Internet gateway, your subnet's route table must contain a route that directs Internet-bound traffic to the Internet gateway.
** Use ClassicLink to enable communication over private IP
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-classiclink.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html#concepts-vpc
** # --8<-------------------------- separator ------------------------>8--
** DONE VPC elastic IP Question
CLOSED: [2015-04-15 Wed 12:09]
Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
A. The Elastic IP will be dissociated from the instance
B. All data on instance-store devices will be lost
C. All data on EBS (Elastic Block Store) devices will be lost
D. The ENI (Elastic Network Interface) is detached
E. The underlying host for the instance is changed

B E
** DONE PCI DSS: Payment Card Industry Data Security Standard
CLOSED: [2015-04-15 Wed 14:54]
Host a PCI-Compliant E-Commerce Website

E-commerce websites often handle sensitive data, such as credit card information, user profiles, and purchase history. As such, they require a Payment Card Industry Data Security Standard (PCI DSS) compliant infrastructure in order to protect sensitive customer data.

Because AWS is accredited as a Level 1 service provider under PCI DSS, you can run your application on PCI-compliant technology infrastructure for storing, processing, and transmitting credit card information in the cloud. As a merchant, you still have to manage your own PCI certification, but by using an accredited infrastructure service provider, you don't need to put additional effort into PCI compliance at the infrastructure level. For more information about PCI compliance, go to the AWS Compliance Center.
** DONE Amazon EC2 security groups control only ingress, but VPC security groups can control both
CLOSED: [2015-04-15 Wed 15:33]
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
Amazon VPC offers additional security features over the Amazon EC2-Classic environment. VPC security groups allow you to control both ingress and egress traffic (Amazon EC2 security groups control only ingress), and you can define rules for all IP protocols and ports.
** DONE [#B] Public subnet, private subnet and VPN-only subnet
CLOSED: [2015-04-15 Wed 16:54]
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/images/subnets-diagram.png

  • AWS reserves both the first four IP addresses and the last IP address in each subnet CIDR block.
  • Every subnet that you create is automatically associated with the main route table for the VPC.

| Subnet          | Summary                                                                                    |
|-----------------+---------------------------------------------------------------------------------------------|
| public subnet   | the subnet's traffic is routed to an Internet gateway                                        |
| private subnet  | no route to the Internet gateway                                                             |
| VPN-only subnet | no route to the Internet gateway, but its traffic is routed to a virtual private gateway     |
** DONE [#A] VPC peering: connects two VPCs by private IP addresses
CLOSED: [2015-04-15 Wed 18:13]
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
To enable the flow of traffic, both the local VPC and the peer VPC need an extra route in their route tables.

You are charged for data transfer within a VPC peering connection at the same rate as you are charged for data transfer across Availability Zones.

Limitations of VPC peering

  • You cannot create a VPC peering connection between VPCs in different regions.
  • You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
  • VPC peering does not support transitive peering relationships
  • The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
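The request/accept/route steps described in the next paragraph can be scripted; a hedged AWS CLI sketch (all IDs and CIDRs are placeholders):

#+BEGIN_EXAMPLE
# Request peering from the local VPC to the peer VPC (hypothetical IDs)
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
# The peer owner accepts the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678
# Each side adds a route to the other side's CIDR
aws ec2 create-route --route-table-id rtb-9c6f44f8 \
    --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-12345678
#+END_EXAMPLE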

To establish a VPC peering connection, the owner of the requester VPC (or local VPC) sends a request to the owner of the peer VPC to create the VPC peering connection. The peer VPC can be owned by you, or another AWS account, and cannot have a CIDR block that overlaps with the requester VPC's CIDR block. The owner of the peer VPC has to accept the VPC peering connection request to activate the VPC peering connection. To enable the flow of traffic between the peer VPCs using private IP addresses, add a route to one or more of your VPC's route tables that points to the IP address range of the peer VPC. The owner of the peer VPC adds a route to one of their VPC's route tables that points to the IP address range of your VPC. You may also need to update the security group rules that are associated with your instance to ensure that traffic to and from the peer VPC is not restricted. For more information about security groups, see Security Groups for Your VPC.
** DONE How to launch instances into EC2-Classic?
CLOSED: [2015-04-15 Wed 18:25]
https://support.rightscale.com/09-Clouds/AWS/FAQs/What_is_an_EC2-Classic_network%3F

Since the inception of EC2-VPC, all new AWS accounts are automatically on the EC2-VPC platform, so you do not have the choice to launch instances into EC2-Classic if you are a new customer.
** DONE VPC performance: IGW is HA and has no bandwidth constraints
CLOSED: [2015-04-16 Thu 09:33]
AWS Certified SysOps Administrator Associate Practice Exam:
Your infrastructure makes use of a single m3.medium NAT instance inside a VPC to allow inside hosts to reach out to the Internet without being directly addressable via the Internet. As your infrastructure has grown, you are finding the amount of traffic going through the NAT instance is overwhelming it and slowing down communications.

What two solutions would increase available bandwidth? Choose 2 answers

A. Add another IGW to your VPC.
B. Increase the class size of the NAT instance from an m3.medium to an m3.xlarge.
C. Use Direct Connect to route all traffic through your VPC and back to the Internet instead of a NAT.
D. Add another NAT instance and configure your subnet route tables to be spread across the two NAT instances.
E. Route outbound traffic through an elastic load balancer (ELB) rather than a NAT, thus taking advantage of an ELB's ability to scale to match demand.

http://aws.amazon.com/vpc/faqs/#C9
Q. Are there any bandwidth limitations for Internet Gateways? Do I need to be concerned about its availability? Can it be a single point of failure?
No. The Internet Gateway is horizontally-scaled, redundant, and highly-available. It imposes no bandwidth constraints.
** DONE VPC SPOF question
CLOSED: [2015-04-16 Thu 09:34]
AWS Certified SysOps Administrator Associate Practice Exam:
An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and a dual-tunnel VPN connection between their Virtual Private Gateway (VGW) and a router in their data center. The organization would like to eliminate any potential single points of failure in this design.

What step should you take to achieve this organization's objective?

A. Nothing: there are no single points of failure in this architecture.
B. Create and attach a second IGW to provide redundant Internet connectivity.
C. Create, attach, and configure a second VGW to provide redundant VPN connectivity.
D. Configure a second router in the data center and establish a second dual-tunnel VPN connection with the VGW.
** DONE [#A] NAT Instance: enable instances in a private subnet to initiate outbound Internet traffic :IMPORTANT:
CLOSED: [2015-04-16 Thu 10:44]
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/images/nat-instance-diagram.png

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

  • The main route table sends the traffic from the instances in the private subnet to the NAT instance in the public subnet.

On the Routes tab, click Edit, specify 0.0.0.0/0 in the Destination box, select the instance ID of the NAT instance from the Target list, and then click Save.
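The same route can be added from the CLI; a minimal sketch (IDs are placeholders):

#+BEGIN_EXAMPLE
# rtb-... is the private subnet's route table; i-... is the NAT instance
aws ec2 create-route --route-table-id rtb-9c6f44f8 \
    --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0
#+END_EXAMPLE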

http://aws.amazon.com/vpc/faqs/#C9
Q. How do instances without EIPs access the Internet?

Instances without EIPs can access the Internet in one of two ways:
  • Instances without EIPs can route their traffic through a NAT instance to access the Internet. These instances use the EIP of the NAT instance to traverse the Internet. The NAT instance allows outbound communication but doesn't enable machines on the Internet to initiate a connection to the privately addressed machines using NAT.
  • For VPCs with a Hardware VPN connection, instances can route their Internet traffic down the Virtual Private Gateway to your existing datacenter. From there, it can access the Internet via your existing egress points and network security/monitoring devices.
** DONE VPC question: subnet security
CLOSED: [2015-04-16 Thu 11:26]
AWS Certified SysOps Administrator Associate Practice Exam:
You manage your company's VPC in an AWS region and give other teams access to create instances and modify security groups inside subnets dedicated to their teams. You need to make sure the development team can't do anything in their subnets that could allow their instances to impact the production environment instances in the production subnets.

How can you separate parts of your VPC so the instances for development can't interfere with the ones from production?

A. Make sure the subnets only allow routing via an IGW and not the local router.
B. Set up NACLs that restrict what subnets can talk to each other.
C. Put the two subnets into CIDR blocks that are very far apart.
D. Make sure the development subnets are in one Availability Zone and the production is in another.

Answer: B
** DONE VPC security
CLOSED: [2015-04-16 Thu 11:27]
AWS Certified SysOps Administrator Associate Practice Exam:
You have an Amazon VPC with one private subnet and one public subnet with a Network Address Translator (NAT) server. You are creating a group of Amazon Elastic Cloud Compute (EC2) instances that configure themselves at startup via downloading a bootstrapping script from Amazon Simple Storage Service (S3) that deploys an application via GIT.

Which setup provides the highest level of security?

A. Amazon EC2 instances in private subnet, no EIPs, route outgoing traffic via the NAT
B. Amazon EC2 instances in public subnet, no EIPs, route outgoing traffic via the Internet Gateway (IGW)
C. Amazon EC2 instances in private subnet, assign EIPs, route outgoing traffic via the Internet Gateway (IGW)
D. Amazon EC2 instances in public subnet, assign EIPs, route outgoing traffic via the NAT

Answer: A
** DONE Default VPC address: */16 (65,536 addresses); default subnet: */20 (4,096 addresses)
CLOSED: [2015-05-01 Fri 22:00]
After setting up a VPC for your own testing purposes you now decide to set up a subnet but are unsure how many addresses will be allocated per subnet. By default how many addresses are allocated per subnet?

4,096
64
2048
1024

A

The CIDR block for a default VPC is always 172.31.0.0/16. This provides up to 65,536 private IP addresses. The netmask for a default subnet is always /20, which provides up to 4,096 addresses per subnet, a few of which are reserved for our use. By default, a default subnet is a public subnet, because the main route table sends the subnet's traffic that is destined for the Internet to the Internet gateway. You can make a default subnet a private subnet by removing the route from the destination 0.0.0.0/0 to the Internet gateway. However, if you do this, any EC2 instance running in that subnet can't access the Internet or other AWS products, such as Amazon Simple Storage Service (Amazon S3).
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
** DONE Benefits for VPC Instances over EC2-Classic Instances
CLOSED: [2015-05-01 Fri 19:17]
By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:

  • Assign static private IP addresses to your instances that persist across starts and stops
  • Assign multiple IP addresses to your instances
  • Define network interfaces, and attach one or more network interfaces to your instances
  • Change security group membership for your instances while they're running
  • Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
  • Add an additional layer of access control to your instances in the form of network access control lists (ACL)
  • Run your instances on single-tenant hardware

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html
** DONE VPC: security group difference between a newly created group and the default group
CLOSED: [2015-05-02 Sat 16:18]
A user has configured a VPC with a new subnet. The user has created a security group. The user wants to configure that instances of the same subnet communicate with each other. How can the user configure this with the security group?

Configure the subnet as the source in the security group and allow traffic on all the protocols and ports
The user has to use VPC peering to configure this
Configure the security group itself as the source and allow traffic on all the protocols and ports
There is no need for a security group modification as all the instances can communicate with each other inside the same subnet

C

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. AWS provides two features that the user can use to increase security in a VPC: security groups and network ACLs. Security groups work at the instance level. The default security group has a rule which allows the instances to communicate with each other. For a new security group, the user has to add a rule that defines the source as the security group itself, and select all the protocols and ports for that source.

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
** # --8<-------------------------- separator ------------------------>8--
** DONE VPC security question
CLOSED: [2015-05-06 Wed 10:02]
You can use _____ and _____ to help secure the instances in your VPC.
A security groups and multi-factor authentication
B security groups and 2-Factor authentication
C security groups and biometric authentication
D security groups and network ACLs

D
** DONE VPC subnet question
CLOSED: [2015-05-06 Wed 10:53]
You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) network so you decide you should probably use the AWS Management Console and the VPC Wizard. Which of the following is not an option for network architectures after launching the "Start VPC Wizard" in the Amazon VPC page on the AWS Management Console?

VPC with a Public Subnet Only and Hardware VPN Access
VPC with Public and Private Subnets
VPC with a Private Subnet Only and Hardware VPN Access
VPC with Public and Private Subnets and Hardware VPN Access

A

Amazon VPC enables you to build a virtual network in the AWS cloud - no VPNs, hardware, or physical datacenters required. Your AWS resources are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs by going to the Amazon VPC page on the AWS Management Console and clicking the "Start VPC Wizard" button. You'll be presented with four basic options for network architectures. After selecting an option, you can modify the size and IP address range of the VPC and its subnets. If you select an option with Hardware VPN Access, you will need to specify the IP address of the VPN hardware on your network. You can modify the VPC to add more subnets or add or remove gateways at any time after the VPC has been created. The four options are:
  • VPC with a Single Public Subnet Only
  • VPC with Public and Private Subnets
  • VPC with Public and Private Subnets and Hardware VPN Access
  • VPC with a Private Subnet Only and Hardware VPN Access
https://aws.amazon.com/vpc/faqs/
** TODO [#A] VPC subnet question
AWS Certified Solutions Architect Associate Practice Exam:
You have been tasked with creating a VPC network topology for your company. The VPC network must support both Internet-facing applications and internally-facing applications accessed only over VPN. Both Internet-facing and internally-facing applications must be able to leverage at least three AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements?

A. 2
B. 3
C. 4
D. 6
** TODO [#A] Lift and shift of an existing on-premises application to AWS
** DONE [#A] Attaching another network interface to an instance is not a method to increase the network bandwidth
CLOSED: [2015-05-06 Wed 11:57]
You need to create a management network using network interfaces for a virtual private cloud (VPC) network. Which of the following statements is incorrect pertaining to Best Practices for Configuring Network Interfaces?

When launching an instance from the CLI or API, you can specify the network interfaces to attach to the instance for both the primary (eth0) and additional network interfaces.
You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach).
You can attach a network interface in one subnet to an instance in another subnet in the same VPC; however, both the network interface and the instance must reside in the same Availability Zone.
Attaching another network interface to an instance is a valid method to increase or double the network bandwidth to or from the dual-homed instance

D

Best Practices for Configuring Network Interfaces:
  • You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary (ethN) network interfaces when the instance is running or stopped. However, you can't detach the primary (eth0) interface.
  • You can attach a network interface in one subnet to an instance in another subnet in the same VPC; however, both the network interface and the instance must reside in the same Availability Zone.
  • When launching an instance from the CLI or API, you can specify the network interfaces to attach to the instance for both the primary (eth0) and additional network interfaces.
  • Launching an instance with multiple network interfaces automatically configures interfaces, private IP addresses, and route tables on the operating system of the instance.
  • A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IP address, and modify the route table accordingly. (Instances running Amazon Linux automatically recognize the warm or hot attach and configure themselves.)
  • Attaching another network interface to an instance is not a method to increase or double the network bandwidth to or from the dual-homed instance.
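For illustration, a hot attach of a secondary interface from the CLI might look like this (IDs are placeholders):

#+BEGIN_EXAMPLE
# Attach a hypothetical ENI as eth1 to a running instance
aws ec2 attach-network-interface --network-interface-id eni-1a2b3c4d \
    --instance-id i-0123456789abcdef0 --device-index 1
#+END_EXAMPLE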

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#use-network-and-security-appliances-in-your-vpc
** TODO AWS network assets
Which of the following must be included in the network diagram for AWS network assets:
A. Major network zones and primary services
B. Workstations
C. Wireless network connection points
D. Remote network connection points
E. A, C, and D
F. B, C, and D
G. A, B, and C

E
** DONE Feature: A user-created subnet is different from the default subnet
CLOSED: [2015-05-07 Thu 16:37]
A user has created a subnet with VPC and launched an EC2 instance in that subnet with only default settings. Which of the below mentioned options is ready to use on the EC2 instance as soon as it is launched?

Internet gateway
Private IP
Elastic IP
Public IP

B

A Virtual Private Cloud (VPC) is a virtual network dedicated to a user's AWS account. A subnet is a range of IP addresses in the VPC. The user can launch AWS resources into a subnet. There are two supported platforms into which a user can launch instances: EC2-Classic and EC2-VPC. When the user launches an instance into a nondefault subnet, it will only have a private IP assigned to it. The instances in the subnet can communicate with each other, but cannot reach the Internet or AWS services such as RDS / S3.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
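To give such an instance public connectivity later, an Elastic IP can be allocated and associated; a hedged CLI sketch (the instance ID is a placeholder):

#+BEGIN_EXAMPLE
# Allocate a VPC Elastic IP, then associate it with a hypothetical instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-12345678
#+END_EXAMPLE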

| Characteristic | Default Subnet                          | Nondefault Subnet   |
|----------------+-----------------------------------------+---------------------|
| DNS hostnames  | DNS hostnames are enabled by default.   | Disabled by default |
| Public IP      | Receives a public IP address by default | Doesn't by default  |
** DONE [#A] Concept: VPC subnet: what's the IP range for 20.0.0.128/25?
CLOSED: [2015-05-08 Fri 10:55]
| CIDR          | Range                   |
|---------------+-------------------------|
| 20.0.0.128/25 | 20.0.0.128 - 20.0.0.255 |
| 20.0.0.0/25   | 20.0.0.0 - 20.0.0.127   |
| 20.0.0.0/24   | 20.0.0.0 - 20.0.0.255   |

A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25. The user is trying to create the private subnet with CIDR 20.0.0.128/25. Which of the below mentioned statements is true in this scenario?

It will not allow the user to create the private subnet due to a CIDR overlap
This statement is wrong as AWS does not allow CIDR 20.0.0.0/25
It will allow the user to create a private subnet with CIDR as 20.0.0.128/25
It will not allow the user to create a private subnet due to a wrong CIDR range

When the user creates a subnet in a VPC, he specifies the CIDR block for the subnet. The CIDR block of a subnet can be the same as the CIDR block for the VPC (for a single subnet in the VPC), or a subset (to enable multiple subnets). If the user creates more than one subnet in a VPC, the CIDR blocks of the subnets must not overlap. Thus, in this case the user has created a VPC with the CIDR block 20.0.0.0/24, which supports 256 IP addresses (20.0.0.0 to 20.0.0.255). The user can break this CIDR block into two subnets, each supporting 128 IP addresses. One subnet uses the CIDR block 20.0.0.0/25 (for addresses 20.0.0.0 - 20.0.0.127) and the other uses the CIDR block 20.0.0.128/25 (for addresses 20.0.0.128 - 20.0.0.255).
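The same split can be expressed with the CLI; a minimal sketch (the VPC ID is a placeholder):

#+BEGIN_EXAMPLE
# Two non-overlapping /25 subnets inside the 20.0.0.0/24 VPC
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 20.0.0.0/25
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 20.0.0.128/25
#+END_EXAMPLE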

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
** DONE Feature: VPN CloudHub
CLOSED: [2015-05-08 Fri 15:29]
Your manager has just given you access to multiple VPN connections that someone else has just set up between all your company's offices and needs you to make sure that the communication between the VPNs is secure. Which of the following services would be best for providing a low-cost hub-and-spoke model for primary or backup connectivity between these remote offices?

AWS VPN CloudHub
AWS OpsWorks
AWS CloudHSM
Amazon CloudFront

A

If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who'd like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
** DONE [#B] For the Source field of inbound rules in a security group, we can use a security group id instead of a CIDR
CLOSED: [2015-05-09 Sat 10:05]
A user has configured a VPC with a new subnet. The user has created a security group. The user wants to configure that instances of the same subnet communicate with each other. How can the user configure this with the security group?

Configure the security group itself as the source and allow traffic on all the protocols and ports
There is no need for a security group modification as all the instances can communicate with each other inside the same subnet
Configure the subnet as the source in the security group and allow traffic on all the protocols and ports
The user has to use VPC peering to configure this

A

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. AWS provides two features that the user can use to increase security in a VPC: security groups and network ACLs. Security groups work at the instance level. The default security group has a rule which allows the instances to communicate with each other. For a new security group, the user has to add a rule that defines the source as the security group itself, and select all the protocols and ports for that source.
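A hedged CLI sketch of such a self-referencing rule (the group ID is a placeholder):

#+BEGIN_EXAMPLE
# The group references itself as the source, on all protocols and ports
aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d \
    --protocol -1 --source-group sg-1a2b3c4d
#+END_EXAMPLE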

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html

  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • [#A] Amazon DynamoDB: Predictable and Scalable NoSQL Data Store :noexport:IMPORTANT:

Limitation
| Name                                     | Comment |
|------------------------------------------+---------|
| Maximum number of tables                 | 256     |
| Item size                                | 400 KB  |
|------------------------------------------+---------|
| Read capacity units (individual table)   | 10,000  |
| Write capacity units (individual table)  | 10,000  |
|------------------------------------------+---------|
| Read capacity units (account)            | 20,000  |
| Write capacity units (account)           | 20,000  |

DynamoDB Capacity Units:
| Capacity Units | How to Calculate                                                                      |
|----------------+---------------------------------------------------------------------------------------|
| Reads          | Number of item reads per second x 4 KB item size                                      |
|                | (If you use eventually consistent reads, you'll get twice as many reads per second.)  |
|----------------+---------------------------------------------------------------------------------------|
| Writes         | Number of item writes per second x 1 KB item size                                     |

  • Amazon DynamoDB stores 3 geographically distributed replicas of each table for high availability and data durability.

  • All data items are stored on solid state disks (SSDs) and are automatically replicated across multiple Availability Zones in a Region to provide built-in high availability and data durability.

  • Amazon DynamoDB supports both document and key-value data structures.

  • Individual items in a DynamoDB table can have any number of attributes, although there is a limit of 400 KB on the item size.
** DONE [#B] DynamoDB supports a "conditional write" feature
CLOSED: [2015-04-22 Wed 11:08]
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html
Most write operations in DynamoDB allow conditional writes, where you specify one or more conditions that must be met in order for the operation to succeed.

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
In the "lost update" example, client 2 can add a condition to verify that item values on the server side are the same as the item copy on the client side. If the item on the server has been updated, client 2 can choose to get an updated copy before applying its own updates.
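A hedged CLI sketch of such a conditional update (table, key, and attribute names are all made up):

#+BEGIN_EXAMPLE
# Update Price only if it still has the value this client originally read
aws dynamodb update-item --table-name Products \
    --key '{"Id": {"S": "item1"}}' \
    --update-expression "SET Price = :new" \
    --condition-expression "Price = :expected" \
    --expression-attribute-values '{":new": {"N": "20"}, ":expected": {"N": "10"}}'
#+END_EXAMPLE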

#+BEGIN_EXAMPLE
Conditional Updates and Concurrency Control

In a multiuser environment, it is important to ensure data updates made by one client don't overwrite updates made by another client. This "lost update" problem is a classic database concurrency issue. Suppose two clients read the same item. Both clients get a copy of that item from DynamoDB. Client 1 then sends a request to update the item. Client 2 is not aware of any update. Later, Client 2 sends its own request to update the item, overwriting the update made by Client 1. Thus, the update made by Client 1 is lost.

DynamoDB supports a "conditional write" feature that lets you specify a condition when updating an item. DynamoDB writes the item only if the specified condition is met; otherwise it returns an error. In the "lost update" example, client 2 can add a condition to verify item values on the server-side are same as the item copy on the client-side. If the item on the server is updated, client 2 can choose to get an updated copy before applying its own updates.
#+END_EXAMPLE
** DONE [#B] DynamoDB also supports an "atomic counter" feature
CLOSED: [2015-04-22 Wed 11:10]
In this case, the client only wants to increment a value regardless of what the previous value was.

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
#+BEGIN_EXAMPLE
DynamoDB also supports an "atomic counter" feature where you can send a request to add or subtract from an existing attribute value without interfering with another simultaneous write request. For example, a web application might want to maintain a counter per visitor to its site. In this case, the client only wants to increment a value regardless of what the previous value was. DynamoDB write operations support incrementing or decrementing existing attribute values.
#+END_EXAMPLE
** DONE [#A] DynamoDB capacity units: Read Capacity Units and Write Capacity Units
CLOSED: [2015-04-22 Wed 12:10]
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html

  • For example, if BatchGetItem reads a 1.5 KB item and a 6.5 KB item, DynamoDB will calculate the size as 12 KB (4 KB + 8 KB), not 8 KB (1.5 KB + 6.5 KB).

When you create or update a table, you specify how much provisioned throughput capacity you want to reserve for reads and writes. DynamoDB will reserve the necessary machine resources to meet your throughput needs while ensuring consistent, low-latency performance.
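A minimal sketch of reserving throughput at table creation (table and key names are made up):

#+BEGIN_EXAMPLE
# Reserve 40 read and 10 write capacity units for a hypothetical table
aws dynamodb create-table --table-name Products \
    --attribute-definitions AttributeName=Id,AttributeType=S \
    --key-schema AttributeName=Id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=40,WriteCapacityUnits=10
#+END_EXAMPLE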

  • For tables with secondary indexes, DynamoDB consumes additional capacity units. For example, if you wanted to add a single 1 KB item to a table, and that item contained an indexed attribute, then you would need two write capacity units: one for writing to the table, and another for writing to the index.

  • You cannot group multiple items in a single read operation, even if the items together are 4 KB or smaller. For example, if your items are 3 KB and you want to read 80 items per second from your table, then you need to provision 80 (reads per second) x 1 (3 KB / 4 KB = 0.75, rounded up to the next whole number) = 80 read capacity units for strong consistency. For eventual consistency, you need to provision only 40 read capacity units.

  • You can use the Query and Scan operations in DynamoDB to retrieve multiple consecutive items from a table or an index in a single request.

    With these operations, DynamoDB uses the cumulative size of the processed items to calculate provisioned throughput. For example, if a Query operation retrieves 100 items that are 1 KB each, the read capacity calculation is not (100 x 4 KB) = 100 read capacity units, as if those items were retrieved individually using GetItem or BatchGetItem. Instead, the total would be only 25 read capacity units ((100 * 1024 bytes) = 100 KB, which is then divided by 4 KB). For more information see Item Size Calculations.
** # --8<-------------------------- separator ------------------------>8--
** DONE DynamoDB primary key: "Hash Primary Key" and "Hash and Range Primary Key"
CLOSED: [2015-04-22 Wed 16:59]
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html

Hash and Range Primary Key - The primary key is made of two attributes. The first attribute is the hash attribute and the second attribute is the range attribute.

It is possible for two items to have the same hash key value, but those two items must have different range key values.
** DONE Global Secondary Indexes: enable fast query lookups based on chosen attributes
CLOSED: [2015-04-23 Thu 14:59]

  • [#A] Amazon EBS: Elastic Block Store :noexport: http://aws.amazon.com/ebs/
  • EBS provides persistent block level storage volumes for use with Amazon EC2 instances in the AWS Cloud.

  • Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.

Limitation
| Name                                                     | Comment |
|----------------------------------------------------------+---------|
| Total volume storage of General Purpose (SSD) volumes    | 20 TiB  |
| Total volume storage of Provisioned IOPS (SSD) volumes   | 20 TiB  |
| Total volume storage of Magnetic volumes                 | 20 TiB  |
|----------------------------------------------------------+---------|
| Total provisioned IOPS                                   | 40,000  |
|----------------------------------------------------------+---------|
| Number of EBS volumes                                    | 5,000   |
| Number of EBS snapshots                                  | 10,000  |

  • Feature: Characteristics of different EBS volume types
    http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
|                    | Magnetic                       | General Purpose (SSD)       | Provisioned IOPS (SSD)                  |
|--------------------+--------------------------------+-----------------------------+-----------------------------------------|
| Use cases          | Cold workloads where           | System boot volumes,        | Critical business applications that     |
|                    | data is infrequently accessed; | virtual desktops,           | require sustained IOPS performance, or  |
|                    | scenarios where the lowest     | small to medium sized       | more than 10,000 IOPS or 160 MiB/s      |
|                    | storage cost is important      | databases, development and  | of throughput per volume; large         |
|                    |                                | test environments           | database workloads, such as: MongoDB,   |
|                    |                                |                             | Microsoft SQL Server, MySQL,            |
|                    |                                |                             | PostgreSQL, Oracle                      |
|--------------------+--------------------------------+-----------------------------+-----------------------------------------|
| Volume size        | 1 GiB - 1 TiB                  | 1 GiB - 16 TiB              | 4 GiB - 16 TiB                          |
| Maximum throughput | 40-90 MiB/s                    | 160 MiB/s                   | 320 MiB/s                               |
|--------------------+--------------------------------+-----------------------------+-----------------------------------------|
| IOPS performance   | Averages 100 IOPS, with the    | Baseline performance of     | Consistently performs at provisioned    |
|                    | ability to burst to hundreds   | 3 IOPS/GiB (up to           | level, up to 20,000 IOPS maximum        |
|                    | of IOPS                        | 10,000 IOPS), with the      |                                         |
|                    |                                | ability to burst to         |                                         |
|                    |                                | 3,000 IOPS for volumes      |                                         |
|                    |                                | under 1,000 GiB             |                                         |
|--------------------+--------------------------------+-----------------------------+-----------------------------------------|
| Volume name        | standard                       | gp2                         | io1                                     |
** TODO [#A] How does Amazon implement EBS? By RAID?
** TODO [#A] EBS snapshots are done asynchronously; what does this mean in the backend logic?
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
Snapshots occur asynchronously and the status of the snapshot is pending until the snapshot is complete.
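One way to observe the asynchronous behavior is to take a snapshot and poll its state; a hedged sketch (volume and snapshot IDs are placeholders):

#+BEGIN_EXAMPLE
# The new snapshot reports "pending" until it completes in the background
aws ec2 create-snapshot --volume-id vol-1a2b3c4d --description "demo snapshot"
aws ec2 describe-snapshots --snapshot-ids snap-1a2b3c4d \
    --query 'Snapshots[0].State'
#+END_EXAMPLE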

A sys admin is trying to understand EBS snapshots. Which of the below mentioned statements will not be useful to the admin to understand the concepts about a snapshot?

The snapshot is synchronous
The snapshot is incremental
It is recommended to stop the instance before taking a snapshot for consistent data
The snapshot captures the data that has been written to the hard disk when the snapshot command was executed

A

The AWS snapshot is a point-in-time backup of an EBS volume. When the snapshot command is executed, it captures the current state of the data written on the drive and takes a backup. For a better and consistent snapshot of the root EBS volume, AWS recommends stopping the instance. For additional volumes it is recommended to unmount the device. Snapshots are asynchronous and incremental.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
** TODO [#A] When taking an EBS snapshot of a volume in use, will I/O on the volume be frozen automatically?
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

  • You can take a snapshot of an attached volume that is in use. However, the snapshot may exclude any data that has been cached by applications or the operating system.

  • If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete.

  • However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance.
** # --8<-------------------------- separator ------------------------>8--
** DONE Feature: Encrypted EBS volumes and snapshots
CLOSED: [2015-05-02 Sat 13:57]

  • The first time you create an encrypted volume in a region, a default CMK is created for you automatically.

  • Encrypted boot volumes are not supported at this time.

  • There is no way to directly create an unencrypted volume from an encrypted snapshot or vice versa.

  • There is also no way to encrypt an existing volume.
*** TODO How to change an unencrypted EBS volume to encrypted?
*** Encrypted snapshots cannot be shared with anyone
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

  • Encrypted snapshots cannot be shared with anyone, because your volume encryption keys and master key are specific to your account.

--8<-------------------------- separator ------------------------>8--

A user has stored data on an encrypted EBS volume. The user wants to share the data with his friend's AWS account. How can the user achieve this?

Copy the data to an unencrypted volume and then share
Create an AMI from the volume and share the AMI
Take a snapshot and share the snapshot with a friend
If both the accounts are using the same encryption key then the user can share the volume directly

A

AWS EBS supports encryption of the volume. It also supports creating volumes from existing snapshots, provided the snapshots were created from encrypted volumes. If the user has data on an encrypted volume and wants to share it with others, he has to copy the data from the encrypted volume to a new unencrypted volume. Only then can the user share the data; otherwise the snapshot cannot be shared.
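Note that encryption is chosen when a volume is created; a minimal CLI sketch (size and AZ are made up):

#+BEGIN_EXAMPLE
# Encrypted volume; omit --encrypted to create an unencrypted one instead
aws ec2 create-volume --size 100 --availability-zone us-east-1a \
    --volume-type gp2 --encrypted
#+END_EXAMPLE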

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
*** DONE [#A] EBS encryption: encrypt volumes with the AES-256 algorithm and CMKs (Customer Master Keys)
CLOSED: [2015-05-04 Mon 08:50]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

  • Encrypted boot volumes are not supported at this time.

  • There is no way to directly create an unencrypted volume from an encrypted snapshot or vice versa.

  • Public or shared snapshots of encrypted volumes are not supported, because other accounts would not be able to decrypt your data.

  • There is also no way to encrypt an existing volume. However, you can migrate existing data between encrypted volumes and unencrypted volumes.
** DONE [#A] Delete EBS snapshot: deleting previous EBS snapshots doesn't affect restoring from later snapshots
CLOSED: [2015-05-04 Mon 15:08]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-snapshot.html

  • When you delete a snapshot, only the data exclusive to that snapshot is removed. Deleting previous snapshots of a volume does not affect your ability to restore volumes from later snapshots of that volume.

  • You can't delete a snapshot of the root device of an EBS volume used by a registered AMI.
*** DONE Question
CLOSED: [2015-05-04 Mon 15:07]
A user has created three EBS snapshots on 3 consecutive days. The Day-1 snapshot refers to blocks "A-B-C"; Day-2 modified the "B" block and newly added a "D" block; Day-3 modified "B" from the previous day and newly added an "E" block. If the user deletes the snapshot of Day-2, what will happen?

It will delete the block which was referred to as modified "B" of Day-2
It will delete the block which was referred to as modified "B" as well as the newly created "D" of Day-2
Nothing. The user cannot delete the snapshot as it is being referred to in other blocks
It will remove the snapshot of Day-2 and the user will lose the contents of blocks "B" and "D"

A

A snapshot is incremental, and EBS takes a snapshot of only the modified blocks of a volume. In this case Day-2 will have blocks for the modified "B" and the new "D". Since "D" is not modified on Day-3, it will not be deleted. However, as "B" was modified again on Day-3, the Day-2 version of "B" is no longer referred to by any snapshot, and EBS will delete that block.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
** DONE [#A] Share EBS snapshots: allow AWS accounts the create volume permission
CLOSED: [2015-05-04 Mon 15:05]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html

  • You can't share encrypted snapshots

    Encrypted snapshots cannot be shared with anyone, because your volume encryption keys and master key are specific to your account. If you need to share your encrypted snapshot data, you can migrate the data to an unencrypted volume and share a snapshot of that volume.

  • Snapshots are constrained to the region in which they are created. If you would like to share a snapshot with another region, you need to copy the snapshot to that region.
*** DONE Share AWS AMI question
CLOSED: [2015-05-04 Mon 16:09]
A user is sharing the AWS AMI with selected users. Will the new user be able to create a volume from the shared AMI?

Yes, always.
No, never.
Yes, provided the owner has given the launch instance permission.
Yes, provided the owner has given the create volume permission.

D

In Amazon Web Services, when a user is sharing an AMI with another user, the owner needs to give explicit permission to other users to create a volume from the snapshot. Otherwise the other user cannot create a volume from the snapshot.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
** DONE [#A] Feature: What does point-in-time snapshot mean?
CLOSED: [2015-04-01 Wed 00:33]
http://docs.oracle.com/cd/E19050-01/sun.cluster31/817-4229/appreplication-43/index.html

http://it.toolbox.com/blogs/storage-360-degrees/point-in-time-data-snapshotsgood-or-bad-45364

  • "Point in Time" snapshots are created from the original data source in a pointer based approach.

"Point in Time" snapshots are created from the original data source in a pointer based approach. This method of capturing a mountable picture of how data looked at a point in time is fast, and requires a fraction of the space of the original source LUN. When data is written to the source disk after a "Point in Time" snapshot is taken, the original block/blocks of data are written to the snapshot or the "change pool" then the change is committed to the source disk. Here you can see that the "Point in Time" snapshot is useless without the original LUN since unchanged blocks are only referenced as pointers to the original source LUN. Think of the relationship a traditional differential backup has with a full backup; without the full back tape the differentials are worthless.

http://aws.amazon.com/ebs/details/

Amazon EBS provides the ability to save point-in-time snapshots of your volumes to Amazon S3. Amazon EBS Snapshots are stored incrementally: only the blocks that have changed after your last snapshot are saved, and you are billed only for the changed blocks. If you have a device with 100 GB of data but only 5 GB has changed after your last snapshot, a subsequent snapshot consumes only 5 additional GB and you are billed only for the additional 5 GB of snapshot storage, even though both the earlier and later snapshots appear complete. ** DONE [#A] EBS Performance tips: Pre-warming, instance type, CLOSED: [2015-05-04 Mon 09:00] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html

Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration.

--8<-------------------------- separator ------------------------>8--

When a user is accessing an EBS volume for the first time, there can be a large reduction in I/O. How can the user avoid this?

Use PIOPS to increase the I/O the first time
Pre-warm the EBS volumes as per the requirement
There is no decrease in the I/O performance the first time and the user can use it as it is
Use a higher instance size which gives better I/O

B

There is a 5 to 50 percent reduction in IOPS when the user first accesses each block of data on a newly created or restored EBS volume. The user can avoid this performance hit by accessing each block in advance to pre-warm the Amazon EBS volumes.
** # --8<-------------------------- separator ------------------------>8--
** TODO Does EBS have consistency delays for write operations?
http://www.datastax.com/dev/blog/what-is-the-story-with-aws-storage
** [#A] Amazon EBS Volume Types
http://aws.amazon.com/ebs/details/
| Volume Type           | Amazon EBS General Purpose (SSD) Volumes | EBS Provisioned IOPS (SSD) | EBS Magnetic           |
|-----------------------+------------------------------------------+----------------------------+------------------------|
| Use Case              | Boot volumes, Small to Med DBs           | I/O intensive, DB          | Infrequent Data Access |
| Max IOPS/volume       | 10,000                                   | 20,000                     | 40 - 200               |
| Max throughput/volume | 160 MBps                                 | 320 MBps                   | 40 - 90 MBps           |
| API Name              | gp2                                      | io1                        | standard               |

  • To maximize the benefit of Provisioned IOPS (SSD) volumes, we recommend using EBS-optimized EC2 instances. ** DONE Amazon EBS Snapshots CLOSED: [2015-04-01 Wed 00:22] http://aws.amazon.com/ebs/details/#snapshots

  • Amazon EBS Snapshots are stored incrementally

    only the blocks that have changed after your last snapshot are saved, and you are billed only for the changed blocks. If you have a device with 100 GB of data but only 5 GB has changed after your last snapshot, a subsequent snapshot consumes only 5 additional GB and you are billed only for the additional 5 GB of snapshot storage, even though both the earlier and later snapshots appear complete.
** DONE Feature: EBS create volume from snapshot: lazy loading
CLOSED: [2015-05-04 Mon 14:58]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
When you create a volume from an existing snapshot, it loads lazily in the background so that you can begin using it right away.

If you access a piece of data that hasn't been loaded yet, the volume immediately downloads the requested data from Amazon S3, and then continues loading the rest of the volume's data in the background.
** DONE Procedure to create EBS snapshot
CLOSED: [2015-05-08 Fri 15:43]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

Case 1. If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete.

Case 2: However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot.
** DONE EBS snapshot: the first snapshot copies all data written on the volume, not the blank/empty blocks
CLOSED: [2015-05-04 Mon 16:15]
A user has created an EBS volume of 10 GB. The user takes the first snapshot of that volume. What will happen when the snapshot is taken?

A. AWS will create a snapshot of the modified content in the same AZ of the region
B. The I/O on the volume will be frozen while a snapshot is being taken
C. AWS will copy all the blocks from EBS and create a snapshot
D. AWS will create a snapshot of only blocks which are written on the volume

D

When a user creates a snapshot, AWS asynchronously copies the data modified on the EBS volume. It does not copy the whole volume or all the data written to it, but just the modified blocks. For the first snapshot it will copy all the data written on the volume, but not the blank/empty blocks.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
** DONE Windows EBS question
CLOSED: [2015-05-04 Mon 16:23]
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/device_naming.html
By default, when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the _____ Service to set the drive letters of the EBS volumes per your specifications.

A. EBSConfig Service
B. AMIConfig Service
C. Ec2Config Service
D. Ec2-AMIConfig Service

http://surajbatuwana.blogspot.com.au/p/aws-certification-sample-questions.html

C
** TODO [#A] Key difference between General Purpose (SSD) Storage and Provisioned IOPS (SSD) Storage :IMPORTANT:
** DONE [#A] Feature: General Purpose (SSD) volumes deliver a ratio of 3 IOPS per GB
CLOSED: [2015-05-06 Wed 11:50]
http://aws.amazon.com/ec2/faqs/
General Purpose (SSD) volumes deliver a ratio of 3 IOPS per GB, offer single-digit millisecond latencies, and also have the ability to burst up to 3,000 IOPS for short periods.
** DONE EBS data backup
CLOSED: [2015-05-06 Wed 11:57]
Your organisation is in the business of architecting complex transactional databases, and for a variety of reasons this has been done on EBS. What is AWS's recommendation for customers who have architected databases using EBS for backups?

A. Backups to Amazon Glacier be performed through the database management system
B. Backups to Amazon S3 be performed through the database management system
C. If you take regular snapshots, no further backups are required
D. Backups to AWS Storage Gateway be performed through the database management system

B

Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of the normal operation of those services and at no additional charge. However, Amazon EBS replication is stored within the same availability zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed. AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2.
http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf
** TODO [#B] Concept: IOPS: input/output operations per second
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
Amazon EBS measures each I/O operation per second (that is, 256 KB or smaller) as one IOPS.

I/O operations that are larger than 256 KB are counted in 256 KB capacity units. For example, a 1,024 KB I/O operation would count as 4 IOPS.
** DONE [#B] EBS snapshot security question
CLOSED: [2015-05-09 Sat 11:20]
A user is planning to schedule a backup for an EBS volume. The user wants security of the snapshot data. How can the user achieve data encryption with a snapshot?

A. By default the snapshot is encrypted by AWS
B. While creating a snapshot, select the snapshot with encryption
C. Enable server-side encryption for the snapshot using S3
D. Use encrypted EBS volumes so that the snapshot will be encrypted by AWS

D

AWS EBS supports encryption of the volume. It also supports creating volumes from existing snapshots, provided the snapshots were created from encrypted volumes. The data at rest, the I/O, as well as all the snapshots of the encrypted EBS volume will be encrypted. EBS encryption is based on the AES-256 cryptographic algorithm, which is the industry standard.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
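
A minimal sketch of creating an encrypted volume from the CLI, so that snapshots taken from it are encrypted automatically (the availability zone and volume ID below are placeholders):

#+BEGIN_SRC sh
# Create an encrypted gp2 volume; snapshots of it will be encrypted as well.
aws ec2 create-volume --size 100 --volume-type gp2 \
    --availability-zone us-east-1a --encrypted
# Snapshot it; the snapshot inherits the volume's encryption.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "encrypted backup"
#+END_SRC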

  • [#A] Amazon ELB: Load balancer :noexport:
Limitation
| Name                | Comment                                                                   |
|---------------------+----------------------------------------------------------------------------|
| ELB count           | You can create up to twenty (20) Elastic Load Balancers per region.      |
| SSL Certificates    | The number of SSL certificates supported by an ELB at a given time is 1  |
| Connection timeout  | Default idle timeout for Load Balancer is 60 seconds.                    |
| Connection Draining | Default timeout is 300 seconds. Maximum can be set between 1s and 1 hour |

X-Forwarded Headers
| Name              | Comment                                                                         |
|-------------------+----------------------------------------------------------------------------------|
| X-Forwarded-For   | X-Forwarded-For: OriginatingClientIPAddress, proxy1-IPAddress, proxy2-IPAddress |
| X-Forwarded-Proto | X-Forwarded-Proto: originatingProtocol                                          |
| X-Forwarded-Port  |                                                                                 |

  • ELB charges for 2 factors: hours and data processed
  • Load balancers can span multiple Availability Zones within an EC2 Region, but they cannot span multiple regions.
** TODO [#B] Does an internal load balancer need to take public requests?
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-internal-load-balancers.html

Public DNS Name for Your Load Balancer

When an internal load balancer is created, it receives a public DNS name with the following form:

internal-name-123456789.region.elb.amazonaws.com

The DNS servers resolve the DNS name of your load balancer to the private IP addresses of the load balancer nodes for your internal load balancer. Each load balancer node is connected to the private IP addresses of the back-end instances that are in its Availability Zone using elastic network interfaces.
** DONE [#A] ELB supports HTTPS/SSL: What's the difference between HTTPS and SSL :IMPORTANT:
CLOSED: [2015-05-02 Sat 12:28]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
HTTPS uses the SSL protocol to establish secure connections over the HTTP layer.

You can also use SSL protocol to establish secure connections over the TCP layer.
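
A minimal sketch of creating a classic ELB with an HTTPS front-end listener that terminates SSL (the load balancer name, zone, and certificate ARN are placeholders):

#+BEGIN_SRC sh
# HTTPS listener on 443 terminating SSL at the load balancer,
# forwarding plain HTTP to the back-end instances on port 80.
aws elb create-load-balancer --load-balancer-name my-lb \
    --availability-zones us-east-1a \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"
#+END_SRC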

--8<-------------------------- separator ------------------------>8--

You have just set up your first Elastic Load Balancer (ELB) but it does not seem to be configured properly. You discover that before you start using ELB, you have to configure the listeners for your load balancer. Which protocols does ELB use to support the load balancing of applications?

A. HTTP, HTTPS and TCP
B. HTTP, HTTPS, TCP, SSL and SSH
C. HTTP, HTTPS, TCP, and SSL
D. HTTP and HTTPS

C

Before you start using Elastic Load Balancing (ELB), you have to configure the listeners for your load balancer. A listener is a process that listens for connection requests. It is configured with a protocol and a port number for front-end (client to load balancer) and back-end (load balancer to back-end instance) connections. Elastic Load Balancing supports the load balancing of applications using HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP) protocols. HTTPS uses the SSL protocol to establish secure connections over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer. The acceptable ports for both HTTPS/SSL and HTTP/TCP connections are 25, 80, 443, 465, 587, and 1024-65535.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
** DONE [#A] ELB only works in one region and for one VPC
CLOSED: [2015-05-02 Sat 12:55]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
Load balancers can span multiple Availability Zones within an EC2 Region, but they cannot span multiple regions.
*** question
An organization is setting up an application on AWS to have both High Availability (HA) and Disaster Recovery (DR). The organization wants to have both a Recovery Point Objective (RPO) and a Recovery Time Objective (RTO) of 10 minutes. Which of the below mentioned service configurations does not help the organization achieve the said RPO and RTO?

A. Use an AMI copy to keep the AMI available in other regions
B. Create ELB with multi-region routing to allow automated failover when required
C. Use an elastic IP to assign to a running instance and use Route 53 to map the user's domain with that IP
D. Take a snapshot of the data every 10 minutes and copy it to the other region

B

AWS provides an on-demand, scalable infrastructure. AWS EC2 allows the user to launch On-Demand instances, and the organization should create an AMI of the running instance. Copy the AMI to another region to enable Disaster Recovery (DR) in case of a region failure. The organization should also use EBS for persistent storage and take a snapshot every 10 minutes to meet the Recovery Point Objective (RPO). They should also set up an elastic IP and use it with Route 53 to route requests to the same IP. When one of the instances fails, the organization can launch new instances and assign the same EIP to a new instance to achieve High Availability (HA). The ELB works only for a particular region and does not route requests across regions.
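
A minimal sketch of the cross-region copies this explanation calls for (AMI/snapshot IDs and regions are placeholders):

#+BEGIN_SRC sh
# Copy an AMI to a second region for DR; IDs and regions are placeholders.
aws ec2 copy-image --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-west-2 --name "dr-copy"
# Copy an EBS snapshot to the DR region as well.
aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --region us-west-2
#+END_SRC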

http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf
** DONE [#A] Feature: ELB cross-zone load balancing
CLOSED: [2015-05-04 Mon 16:57]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/how-elb-works.html
If the EC2 instance count is imbalanced across the AZs, the load balancer begins to route traffic equally amongst all the enabled Availability Zones irrespective of the instance count in each zone.

  • To ensure that your back-end instances are able to handle the request load in each Availability Zone, it is important to keep approximately the same number of instances in each Availability Zone registered with the load balancer. For example, if you have ten instances in Availability Zone us-west-2a and two instances in us-west-2b, the traffic is equally distributed between the two Availability Zones. As a result, the two instances in us-west-2b serve the same amount of traffic as the ten instances in us-west-2a. Instead, you should distribute your instances so that you have six instances in each Availability Zone.

  • If cross-zone load balancing is disabled, the load balancer node selects the instance from the same Availability Zone that it is in.

  • If cross-zone load balancing is enabled, the load balancer node selects the instance regardless of Availability Zone.
*** question
A user is configuring ELB. Which of the below mentioned options allows the user to route traffic to all instances irrespective of the AZ instance count?

A. Multi zone routing
B. Across zone load balancing
C. Round Robin
D. Cross zone load balancing

D

Elastic Load Balancing provides the option to either enable or disable cross-zone load balancing for the load balancer. With cross-zone load balancing, the load balancer nodes route traffic to the back-end instances across all the Availability Zones.
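
A minimal sketch of enabling it on a classic ELB (the load balancer name is a placeholder):

#+BEGIN_SRC sh
# Enable cross-zone load balancing on an existing classic ELB.
aws elb modify-load-balancer-attributes --load-balancer-name my-lb \
    --load-balancer-attributes "{\"CrossZoneLoadBalancing\":{\"Enabled\":true}}"
#+END_SRC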

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-disable-crosszone-lb.html
*** question
An ELB has 8 instances registered with it: 4 instances are running in one AZ, while 2 each are running in two separate AZs. By default, when a user request arrives, how will ELB distribute the load?

A. Distributing requests across all instances equally
B. Distributing requests across all AZs equally, irrespective of the instance count in each AZ
C. The AZ with a higher instance count will have more requests than others
D. The new request will go to the higher instance count AZ while the old requests will go to AZs with a lower number of instances

B

If the EC2 instance count is imbalanced across the AZs, the load balancer begins to route traffic equally amongst all the enabled Availability Zones irrespective of the instance count in each zone. If the user wants to distribute traffic equally amongst all the instances, the user needs to enable cross-zone load balancing.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/how-elb-works.html
*** question
Which of the below mentioned options ensures that the user requests are always attached to a single instance?

A. Session cookie
B. Session affinity
C. Cross zone load balancing
D. Connection draining

B

Generally an Elastic Load Balancer routes each request independently to the application instance with the smallest load. However, the user can enable the sticky session feature (also known as session affinity) which enables the load balancer to bind a user's session to a specific application instance. This ensures that all requests coming from the user during the session will be sent to the same application instance.
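
A minimal sketch of creating both kinds of stickiness policy on a classic ELB and attaching one to a listener (load balancer, policy, and cookie names are placeholders):

#+BEGIN_SRC sh
# Duration-based stickiness: ELB issues its own cookie, valid 60 seconds.
aws elb create-lb-cookie-stickiness-policy --load-balancer-name my-lb \
    --policy-name my-duration-policy --cookie-expiration-period 60
# Application-controlled stickiness: follow the application's own cookie.
aws elb create-app-cookie-stickiness-policy --load-balancer-name my-lb \
    --policy-name my-app-policy --cookie-name JSESSIONID
# Attach a policy to the port-80 listener.
aws elb set-load-balancer-policies-of-listener --load-balancer-name my-lb \
    --load-balancer-port 80 --policy-names my-duration-policy
#+END_SRC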

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html
** DONE [#A] Scale up for the load balancer itself
CLOSED: [2015-05-02 Sat 23:50]
http://aws.amazon.com/articles/1636185810492479

Scaling Elastic Load Balancers

Once you create an elastic load balancer, you must configure it to accept incoming traffic and route requests to your EC2 instances. These configuration parameters are stored by the controller, and the controller ensures that all of the load balancers are operating with the correct configuration. The controller will also monitor the load balancers and manage the capacity that is used to handle the client requests. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales, so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, with the expectation that clients will re-lookup the DNS at least every 60 seconds. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered on each DNS resolution request. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones.
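
You can observe this DNS behavior yourself; a minimal sketch (the load balancer hostname below is a placeholder):

#+BEGIN_SRC sh
# Resolve the ELB's DNS name; expect several A records with a 60s TTL,
# reordered between lookups.
dig my-lb-1234567890.us-east-1.elb.amazonaws.com A
#+END_SRC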

Your load balancer will also perform health checks on the EC2 instances that are registered with the Elastic Load Balancing service. The health checks must reach the defined target set in the Elastic Load Balancing configuration for the number of successful checks before the instance is considered in service and healthy. For example, for any instance registered with Elastic Load Balancing, if you set the interval for health checks to 20 seconds, and you set the number of successful health checks to 10, then it will take at least 200 seconds before Elastic Load Balancing will route traffic to the instance. The health check also defines a failure threshold. For example, if you set the interval to 20 seconds, and you set the failure threshold at 4, then when an instance no longer responds to requests, at least 80 seconds will elapse before it is taken out of service. However, if an instance is terminated, traffic will no longer be sent to the terminated instance, but there can be a delay before the load balancer is aware that the instance was terminated. For this reason, it is important to de-register your instances before terminating them; instances will be removed from service in a much shorter amount of time if they are de-registered.
** DONE [#A] Feature: connection draining
CLOSED: [2015-05-03 Sun 02:39]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html

Connection Draining allows existing requests to complete before the load balancer shifts traffic away from deregistered or unhealthy back-end instances.

When you enable connection draining for your load balancer, you can set a maximum time for the load balancer to continue serving in-flight requests to the deregistering instance before the load balancer closes the connection. The load balancer forcibly closes connections to the deregistering instance when the maximum time limit is reached.
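
A minimal sketch of enabling connection draining and gracefully deregistering an instance (the load balancer name and instance ID are placeholders):

#+BEGIN_SRC sh
# Enable connection draining with a 300-second timeout.
aws elb modify-load-balancer-attributes --load-balancer-name my-lb \
    --load-balancer-attributes "{\"ConnectionDraining\":{\"Enabled\":true,\"Timeout\":300}}"
# De-register an instance before terminating it; in-flight requests drain first.
aws elb deregister-instances-from-load-balancer --load-balancer-name my-lb \
    --instances i-0123456789abcdef0
#+END_SRC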

--8<-------------------------- separator ------------------------>8--

*** question1
A user has created an ELB with Auto Scaling. Which of the below mentioned offerings from ELB helps the user to stop sending new request traffic from the load balancer to the EC2 instance when the instance is being deregistered, while continuing in-flight requests?

A. ELB connection draining
B. ELB deregistration check
C. ELB auto registration Off
D. ELB sticky session

A

The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html
*** question2
A user has set up connection draining with ELB to allow in-flight requests to continue while the instance is being deregistered through Auto Scaling. If the user has not specified the draining time, how long will ELB allow in-flight request traffic to continue?

A. 300 seconds
B. 0 seconds
C. 600 seconds
D. 3600 seconds

A

The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served. The user can specify a maximum time (3600 seconds) for the load balancer to keep the connections alive before reporting the instance as deregistered. If the user does not specify the maximum timeout period, by default, the load balancer will close the connections to the deregistering instance after 300 seconds.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html
** # --8<-------------------------- separator ------------------------>8--
** DONE ELB feature question
CLOSED: [2015-05-02 Sat 12:20]
Which of the following is NOT TRUE of AWS Elastic Load Balancing?

A. Helps set up security groups in AWS Virtual Private Cloud
B. Supports SSL termination
C. Does not support IPv6
D. Detects health of EC2 Instances

C

http://aws.amazon.com/elasticloadbalancing/details/
IPv6 support is currently unavailable for use in VPC.
** DONE ELB is not free
CLOSED: [2015-05-02 Sat 12:23]
A user is trying to save some cost on the AWS services. Which of the below mentioned options will not help him save cost?

A. Delete the AutoScaling launch configuration after the instances are terminated
B. Delete the unutilized EBS volumes once the instance is terminated
C. Release the elastic IP if not required once the instance is terminated
D. Delete the AWS ELB after the instances are terminated

A
** # --8<-------------------------- separator ------------------------>8--
** DONE ELB SSL certificates question
CLOSED: [2015-05-02 Sat 13:03]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
If you use HTTPS or SSL for your front-end listener, you must install an SSL certificate on your load balancer.

Before you can install the SSL certificate on your load balancer, you must create the certificate, get the certificate signed by a CA, and then upload the certificate using the AWS Identity and Access Management (AWS IAM) service.
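
A minimal sketch of uploading a signed certificate through IAM (the certificate name and file names are placeholders):

#+BEGIN_SRC sh
# Upload a CA-signed certificate to IAM so an ELB listener can reference it.
aws iam upload-server-certificate --server-certificate-name my-cert \
    --certificate-body file://public-cert.pem \
    --private-key file://private-key.pem \
    --certificate-chain file://chain.pem
#+END_SRC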

The number of SSL certificates supported by an ELB at a given time is:

A. 0
B. 2
C. 1
D. 3

C
** DONE The health check is internally performed by ELB and does not help the admin get the ELB activity
CLOSED: [2015-05-02 Sat 13:03]
An admin is planning to monitor the ELB. Which of the below mentioned services does not help the admin capture the monitoring information about the ELB activity?

A. ELB API calls with CloudTrail
B. CloudWatch metrics
C. ELB health check
D. ELB Access logs

C

The admin can capture information about Elastic Load Balancing using:
  • CloudWatch metrics
  • ELB access log files, which are stored in an S3 bucket
  • CloudTrail API call logs, which can notify the user as well as generate logs for each API call
The health check is internally performed by ELB and does not help the admin get the ELB activity.
** DONE When the user deletes the Elastic Load Balancer, all the registered instances will be deregistered, not terminated
CLOSED: [2015-05-02 Sat 13:11]
A user has launched an ELB which has 5 instances registered with it. The user deletes the ELB by mistake. What will happen to the instances?

A. ELB cannot be deleted if it has running instances registered with it
B. ELB will ask the user whether to delete the instances or not
C. Instances will be terminated
D. Instances will keep running

D

When the user deletes the Elastic Load Balancer, all the registered instances will be deregistered. However, they will continue to run. The user will incur charges if he does not take any action on those instances.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_EndLoadBalancing02.html
** DONE ELB sends metrics to CloudWatch at no extra cost
CLOSED: [2015-05-03 Sun 02:47]
Elastic Load Balancing includes 10 metrics and 2 dimensions, and sends data to CloudWatch every minute. This does not cost extra.
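
A minimal sketch of browsing those metrics from the CLI (the load balancer name is a placeholder):

#+BEGIN_SRC sh
# List the ELB metrics CloudWatch has collected for one load balancer.
aws cloudwatch list-metrics --namespace AWS/ELB \
    --dimensions Name=LoadBalancerName,Value=my-lb
#+END_SRC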

A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Elastic Load balancing. Which of the below mentioned statements will help the user understand this functionality better?

A. ELB will send data every minute and will charge the user extra
B. ELB is not supported by CloudWatch
C. It is not possible to set up detailed monitoring for ELB
D. ELB sends data to CloudWatch every minute only and does not charge the user

D

CloudWatch is used to monitor AWS as well as custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. Elastic Load Balancing includes 10 metrics and 2 dimensions, and sends data to CloudWatch every minute. This does not cost extra.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/supported_services.html

Dimensions for Elastic Load Balancing metrics: LoadBalancerName, AvailabilityZone
** DONE ELB pre-warming question
CLOSED: [2015-05-03 Sun 07:49]
As an alternative to pre-warming the ELB, we can assign a smaller ELB to load balance between multiple ELBs. True or False?

B

http://aws.amazon.com/articles/1636185810492479

Amazon ELB is able to handle the vast majority of use cases for our customers without requiring "pre-warming" (configuring the load balancer to have the appropriate level of capacity based on expected traffic).
** # --8<-------------------------- separator ------------------------>8--
** DONE Sticky sessions: "Duration-Based Session Stickiness" VS "Application-Controlled Session Stickiness"
CLOSED: [2015-05-04 Mon 17:11]
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html

  • Duration-Based Session Stickiness: if there is no cookie, the load balancer chooses an instance based on the existing load balancing algorithm.

    The stickiness policy configuration defines a cookie expiration, which establishes the duration of validity for each cookie. The cookie is automatically updated after its duration expires.

  • Application-Controlled Session Stickiness: if the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued.
** DONE ELB sticky session: what if an unhealthy instance comes back to normal again
CLOSED: [2015-05-04 Mon 17:12]
A user has enabled Application-Controlled Session Stickiness. The instance the user's session was bound to becomes unhealthy for 10 minutes. The unhealthy instance becomes healthy again after 10 minutes. What will ELB do in this case?

A. ELB will route requests to the original instance again
B. ELB will try to route the sticky session to another healthy application server
C. The unhealthy instance will never be registered back with ELB once it has been declared unhealthy
D. ELB will throw an error

B

If an application server fails or is removed, the Elastic Load Balancer will try to route the sticky session to another healthy application server. The load balancer will stick to the new healthy application server and continue routing to it, even after the failed application server comes back. However, it is up to the new application server how it responds to a request it has not seen previously.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html
** TODO [#A] ELB security group :IMPORTANT:
A user has created an ELB with three instances. How many security groups will ELB create by default?

A. 2
B. 5
C. 3
D. 1

A

Elastic Load Balancing provides a special Amazon EC2 source security group that the user can use to ensure that back-end EC2 instances receive traffic only from Elastic Load Balancing. This feature needs two security groups: the source security group and a security group that defines the ingress rules for the back-end instances. To ensure that traffic only flows between the load balancer and the back-end instances, the user can add or modify a rule on the back-end security group that limits the ingress traffic so that it can come only from the source security group provided by Elastic Load Balancing.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-security-groups.html

  • [#A] Amazon Route53: Scalable DNS and Domain Name Registration :noexport:
  • Private DNS is a Route 53 feature that lets you have authoritative DNS within your VPCs without exposing your DNS records
** TODO [#A] Why do we need Route53, instead of Godaddy?
** DONE [#A] Alias type VS CNAME type
CLOSED: [2015-04-21 Tue 17:28]
https://aws.amazon.com/route53/faqs/

Additionally, Route 53 offers 'Alias' records (a Route 53-specific virtual record). Alias records are used to map resource record sets in your hosted zone to Elastic Load Balancing load balancers, CloudFront distributions, or S3 buckets that are configured as websites. Alias records work like a CNAME record in that you can map one DNS name (example.com) to another 'target' DNS name (elb1234.elb.amazonaws.com). They differ from a CNAME record in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record.
** DONE [#A] Why to choose DNSPod, Route53, Godaddy: lower DNS resolution failure rate
CLOSED: [2015-04-21 Tue 15:44]
http://amix.dk/blog/post/19712
http://serverfault.com/questions/216330/why-should-i-use-amazon-route-53-over-my-registrars-dns-servers
http://www.cnblogs.com/huang0925/p/3684348.html
http://segmentfault.com/q/1010000000200300
http://www.aws-faq.com/featured/网站如何用上-route-53-服务.html
https://www.similartech.com/compare/amazon-route-53-vs-dnspod

  • Most registrar DNS services are provided for free for a domain you purchased and do not usually come with an SLA.

  • Route53: Having your DNS resolve from 15+ locations worldwide makes your website a little bit faster for your end users. It also allows you to use a lower TTL, which means in case of a website failure, you can move your service over to a new IP faster.
** DONE [#A] DNS TTL: determines how long your DNS records are cached on each DNS server
CLOSED: [2015-04-21 Tue 15:34]
Here "each DNS server" includes the two DNS servers configured for your domain, as well as the DNS servers run by other network providers and organizations.
** DONE [#B] How quickly will changes I make to my DNS settings on Amazon Route 53 propagate globally?
CLOSED: [2015-04-21 Tue 16:52]
https://aws.amazon.com/route53/faqs/
Q. How quickly will changes I make to my DNS settings on Amazon Route 53 propagate globally?

Amazon Route 53 is designed to propagate updates you make to your DNS records to its world-wide network of authoritative DNS servers within 60 seconds under normal conditions. A change is successfully propagated world-wide when the API call returns an INSYNC status listing.
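
A minimal sketch of pushing a record change and polling for INSYNC (the zone ID, change ID, and change-batch file are placeholders):

#+BEGIN_SRC sh
# Submit a record change; the response contains a change ID with status PENDING.
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
    --change-batch file://change-batch.json
# Poll the change until its status flips from PENDING to INSYNC.
aws route53 get-change --id /change/C2EXAMPLE
#+END_SRC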

Note that caching DNS resolvers are outside the control of the Amazon Route 53 service and will cache your resource record sets according to their time to live (TTL). The INSYNC or PENDING status of a change refers only to the state of Route 53's authoritative DNS servers.
** DONE export AWS route53 records
CLOSED: [2015-04-27 Mon 15:59]
http://serverfault.com/questions/535631/how-to-export-a-hosted-zone-in-aws-route-53
=cli53 export --full sciworth.com=
** DONE route 53 question
CLOSED: [2015-04-28 Tue 11:06]
An enterprise customer would like to host some of their corporate servers in Amazon Web Services and has chosen to create a VPC deployment as an extension of their on-premises data center. The existing on-premises data center already has multiple redundant DNS servers. These DNS servers are hosting DNS records for several internal applications, such as mail servers and business applications. The corporate security policy specifies that the DNS names for the internal applications can only be resolved within the secure internal corporate network through the on-premises DNS server and cannot be resolved over the public internet or through a public DNS server. The on-premises DNS servers can resolve Internet domain names through recursion. Secure network connectivity between the VPC and the on-premises data center has been established using IPSec VPN.

The Amazon Elastic Compute Cloud (EC2) instances that are launched within this VPC will host applications that frequently connect to the corporate applications hosted in the on-premises data center. The enterprise policy mandates the use of domain names to connect to all internal applications.

Select the option that is most effective for building a scalable DNS architecture:

A. Create a new Route 53 hosted zone for the internal domain and add all internal domain names as record sets in the Route 53 hosted zone. Modify the default DHCP option set for the VPC to specify the domain name server value as Route 53.
B. Create a new Route 53 hosted zone for the internal domain name, and configure Route 53 to forward all DNS queries for internal domain names to the on-premises DNS servers. Modify the default DHCP option set for the VPC to specify the domain name server value as Route 53.
C. Create a new DHCP option set that specifies the domain name server value as the on-premises DNS servers. Replace the default DHCP option set for the VPC with the newly created DHCP option set.
D. Create two DHCP option sets, DHCPSetA and DHCPSetB. Configure DHCPSetA to specify the Amazon-provided DNS server as the domain name server to resolve all Internet domain names. Configure DHCPSetB to specify the on-premises DNS server as the domain name server to resolve all internal domain names. Apply both DHCP option sets to the VPC so that both Internet domain names and internal domain names can be resolved.


C
** DONE [#B] Route53: Standard Queries, Latency Based Routing Queries and Geo DNS Queries
CLOSED: [2015-04-30 Thu 12:51]
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

https://aws.amazon.com/blogs/aws/route-53-domain-reg-geo-route-price-drop/

https://aws.amazon.com/route53/faqs/
Q. What is Amazon Route 53's Latency Based Routing (LBR) feature?
http://www.slideshare.net/AmazonWebServices/route-53-latency-based-routing

| Routing Policy              | Summary                                                             |
|-----------------------------+---------------------------------------------------------------------|
| Simple Routing Policy       |                                                                     |
| Weighted Routing Policy     |                                                                     |
| Latency Based Routing (LBR) | route end users to the endpoint providing lowest latency           |
| Failover Routing Policy     |                                                                     |
| Geolocation Routing Policy  | specify geographic locations by continent, by country, or by state |
** DONE Latency Based Routing (LBR)
CLOSED: [2015-04-30 Thu 13:19]
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
http://yourstory.com/2012/08/understanding-latency-based-routing-of-amazon-route-53-part-1/
Useful when the application is hosted in multiple AWS regions.

Benefits for LBR:

  • Better performance than running in a single region
  • Improved reliability relative to running in a single region
  • Easier implementation than traditional DNS solutions
  • Much lower prices than traditional DNS solutions

For example, suppose you have ELB load balancers in the US West (Oregon) region and in the Asia Pacific (Singapore) region, and that you've created a latency resource record set in Amazon Route 53 for each load balancer. A user in London enters the name of your domain in a browser, and DNS routes the request to an Amazon Route 53 name server. Amazon Route 53 refers to its data on latency between London and the Singapore region and between London and the Oregon region. If latency is lower between London and the Oregon region, Amazon Route 53 responds to the user's request with the IP address of your load balancer in the Amazon EC2 data center in Oregon. If latency is lower between London and the Singapore region, Amazon Route 53 responds with the IP address of your load balancer in the Amazon EC2 data center in Singapore.
** TODO [#A] What's a zone apex?
http://aws.amazon.com/ec2/faqs/
Q: Can I load balance traffic to the zone apex of my domain (e.g., http://example.com)?

Yes. Please refer to the Elastic Load Balancing Developer Guide for more information.

  • [#A] Amazon Storage Gateway :noexport:
http://aws.amazon.com/storagegateway/
  • Amazon Storage Gateway: Integrate On-Premises IT env with Cloud Storage
  • The service enables you to securely store data to the AWS cloud for scalable and cost-effective storage.
  • To use it, first download the VM image and boot it up as a VM
  • Storage can be mounted as iSCSI devices by your on-premises applications.

| Type                         | Summary                                                                              |
|------------------------------+---------------------------------------------------------------------------------------|
| Gateway-Cached Volumes       | store your primary data in Amazon S3                                                 |
| Gateway-Stored Volumes       | store primary data locally and asynchronously back up point-in-time snapshots to S3  |
| Gateway-Virtual Tape Library | you can have a limitless collection of virtual tapes backed by S3 or Glacier         |

https://www.youtube.com/watch?v=Bb8nk0oWJbU Getting Started with Gateway-Cached volumes on AWS Storage Gateway

http://aws.amazon.com/storagegateway/details/
** How to use Amazon Storage Gateway
AWS Storage Gateway's software appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Once you've installed your gateway and associated it with your AWS account through our activation process, you can use the AWS Management Console to create either gateway-cached or gateway-stored volumes that can be mounted as iSCSI devices by your on-premises applications.

Procedure:

  • Provision a Host: provision a host in your datacenter to deploy the gateway virtual machine (VM).

  • Download and Deploy the VM: download the VM and deploy it to your local host.

  • Provision Local Disk Storage: allocate disks to your deployed VM for low-latency on-premises access to your application data and to temporarily buffer writes before your data is uploaded to AWS.

  • Activate Your Gateway: activate your gateway and select an AWS Region to store your uploaded data.
** Gateway-cached volumes
Gateway-cached volumes allow you to utilize Amazon S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data.

You can create storage volumes of up to 32 TB in size.

Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.
** Gateway-stored volumes
Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS. You can create storage volumes of up to 1 TB in size and mount them as iSCSI devices from your on-premises application servers.
** DONE Choose Storage service question
CLOSED: [2015-05-03 Sun 19:42]
You have been given a scope to set up an AWS media sharing framework for a new startup photo sharing company similar to Flickr. The first thing that comes to mind is that it will obviously need a huge amount of persistent data storage for this framework. Which of the following storage options would be appropriate for persistent storage?

A. AWS Import/Export or AWS Storage Gateway
B. Amazon Glacier or Amazon S3
C. Amazon EBS volumes or Amazon S3
D. Amazon Glacier or AWS Import/Export

C

Persistent storage: if you need persistent virtual disk storage similar to a physical disk drive for files or other data that must persist longer than the lifetime of a single Amazon EC2 instance, Amazon EBS volumes or Amazon S3 are more appropriate.
http://media.amazonwebservices.com/AWS_Storage_Options.pdf
** DONE Choose Storage service question
CLOSED: [2015-05-03 Sun 19:42]
You have been asked to set up a database in AWS that will require frequent and granular updates. You know that you will require a reasonable amount of storage space but are not sure of the best option. What is the recommended storage option when you run a database on an instance with the above criteria?

A. AWS Storage Gateway
B. Amazon S3
C. Amazon EBS
D. Amazon Glacier

C

Amazon EBS provides durable, block-level storage volumes that you can attach to a running Amazon EC2 instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on an instance.

  • [#A] Amazon AutoScaling :noexport:IMPORTANT:
Limitation
| Name                        | Comment                                                                        |
|-----------------------------+---------------------------------------------------------------------------------|
| Scheduled scale             | Auto Scaling lets the user schedule an action for up to a month in the future. |
| Scheduled actions count     | Up to 125 scheduled actions per month per Auto Scaling group.                  |
| Default cooldown period     | The default amount of time is 300 seconds.                                     |
|-----------------------------+---------------------------------------------------------------------------------|
| Auto Scaling groups limit   | 20                                                                              |
| Launch configurations limit | 100                                                                             |
** DONE [#A] AutoScaling Termination Policy
CLOSED: [2015-05-04 Mon 09:08]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingBehavior.InstanceTermination.html
The default termination policy:
  1. Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration.

  2. Auto Scaling determines which instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, it terminates it.

  3. If there are multiple instances that use the oldest launch configuration, Auto Scaling determines which instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances while minimizing the number of hours you are billed for Amazon EC2 usage.) If there is one such instance, Auto Scaling terminates it.

  4. If there is more than one instance closest to the next billing hour, Auto Scaling selects one of these instances at random.
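
To override this default behavior, you can set one of the policies listed in the table below on the group; a minimal sketch (the group name is a placeholder):

#+BEGIN_SRC sh
# Terminate the longest-running instances first instead of the default policy.
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
    --termination-policies OldestInstance
#+END_SRC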

Customize Termination Policy
| Policy                    | Summary |
|---------------------------+---------|
| OldestInstance            |         |
| NewestInstance            |         |
| OldestLaunchConfiguration |         |
| ClosestToNextInstanceHour |         |
| Default                   |         |
*** question1
A user has defined an AutoScaling termination policy to first delete the instance with the nearest billing hour. AutoScaling has launched 3 instances in the US-East-1A region and 2 instances in the US-East-1B region. One of the instances in the US-East-1B region is running nearest to the billing hour. Which instance will AutoScaling terminate first while executing the termination action?

A. Instance with the nearest billing hour in US-East-1A
B. Random instance from US-East-1B
C. Instance with the nearest billing hour in US-East-1B
D. Random instance from US-East-1A

A

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingBehavior.InstanceTermination.html
*** question2
A user has defined an AutoScaling termination policy to first delete the oldest instance. AutoScaling has launched 2 instances in the US-East-1A region and 2 instances in the US-East-1B region. One of the instances in the US-East-1B region is running nearest to the billing hour, while the instance in the US-East-1A region is the oldest one. Which instance will AutoScaling terminate first while executing the termination action?

A. Deletes the instance from US-East-1B which is nearest to the running hour
B. Deletes the oldest instance from US-East-1A
C. Deletes the oldest instance from US-East-1B
D. Randomly selects the AZ and then terminates the oldest instance

D

Even though the user has configured the termination policy, before AutoScaling selects an instance to terminate, it first identifies the Availability Zone that has more instances than the other Availability Zones used by the group. If both the zones have the same instance count, it will select the zone randomly. Within the selected Availability Zone, it identifies the instance that matches the specified termination policy. In this case it will identify the AZ randomly and then delete the oldest instance from that zone which matches the termination policy.
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingBehavior.InstanceTermination.html
** DONE [#A] cooldown period
CLOSED: [2015-05-04 Mon 09:35]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/Cooldown.html

Now a spike in traffic occurs, causing the CloudWatch alarm to fire. When it does, Auto Scaling launches an instance to help handle the increase in demand. However, there's a problem: the instance takes a couple of minutes to launch. During that time, the CloudWatch alarm could continue to fire, resulting in Auto Scaling launching another instance each time the alarm goes off.
** DONE [#A] Auto Scale scheduled events conflict
CLOSED: [2015-05-02 Sat 15:56]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
A scheduled action must have a unique time value.

If you attempt to schedule an activity at a time when another existing activity is already scheduled, the call is rejected with an error message noting the conflict.

A user is trying to set up a recurring Auto Scaling process. The user has set up one process to scale up every day at 8 AM and scale down at 7 PM. The user is trying to set up another recurring process which scales up on the 1st of every month at 8 AM and scales down the same day at 7 PM. What will Auto Scaling do in this scenario?

A. Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling processes
B. Auto Scaling will schedule both the processes but execute only one process randomly
C. Auto Scaling will execute both processes but will add just one instance on the 1st
D. Auto Scaling will add two instances on the 1st of the month

A

Auto Scaling based on a schedule allows the user to scale the application in response to predictable load changes. The user can also configure the recurring schedule action which will follow the Linux cron format. As per Auto Scaling, a scheduled action must have a unique time value. If the user attempts to schedule an activity at a time when another existing activity is already scheduled, the call will be rejected with an error message noting the conflict.
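
A minimal sketch of creating a recurring scheduled action with a cron recurrence (the group/action names and capacity are placeholders):

#+BEGIN_SRC sh
# Scale up every day at 8 AM UTC; names and capacities are placeholders.
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg --scheduled-action-name scale-up-8am \
    --recurrence "0 8 * * *" --desired-capacity 4
#+END_SRC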

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
** DONE [#A] manual scale: change DesiredCapacity
CLOSED: [2015-05-04 Mon 09:35]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html
*** DONE Autoscale scale up question
CLOSED: [2015-05-04 Mon 09:35]
A sysadmin is maintaining an application on AWS. The application is installed on EC2 and the user has configured ELB and Auto Scaling. Considering future load increases, the user is planning to launch new servers proactively so that they get registered with ELB. How can the user add these instances with Auto Scaling?

A. Increase the desired capacity of the Auto Scaling group
B. Increase the maximum limit of the Auto Scaling group
C. Launch an instance manually and register it with ELB on the fly
D. Decrease the minimum limit of the Auto Scaling group

A

A user can increase the desired capacity of the Auto Scaling group, and Auto Scaling will launch new instances as per the new capacity. The newly launched instances will be registered with ELB if the Auto Scaling group is configured with ELB. If the user decreases the minimum size, instances will be removed from Auto Scaling. Increasing the maximum size will not add instances but only set the maximum instance cap.
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html
** DONE [#A] Dynamic Scaling
CLOSED: [2015-05-04 Mon 09:59]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
| Name                    | Summary                                              |
|-------------------------+-------------------------------------------------------|
| ChangeInCapacity        | Increases or decreases the existing capacity.        |
| ExactCapacity           | Changes the current capacity to the specified value. |
| PercentChangeInCapacity | Increases or decreases the capacity by a percentage. |
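
A minimal sketch of creating a dynamic policy with one of these adjustment types (group and policy names are placeholders):

#+BEGIN_SRC sh
# Add 2 instances to the current capacity when the policy is triggered,
# with a 300-second cooldown between scaling activities.
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg \
    --policy-name scale-out-by-2 --adjustment-type ChangeInCapacity \
    --scaling-adjustment 2 --cooldown 300
#+END_SRC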

The user has configured AutoScaling based on the dynamic policy. Which of the following is not the right command to specify a change in capacity as a part of the policy?

"adjustment=-8" (type is ExactCapacity) "adjustment=-1" (type is ChangeInCapacity) "adjustment=-50" (type is PercentChangeInCapacity) "adjustment=3" (type is ExactCapacity)

A

The user can configure the AutoScaling group to automatically scale up and then scale down based on the various specified CloudWatch monitoring conditions. The user needs to provide the adjustment value and the adjustment type. A positive adjustment value increases the current capacity and a negative adjustment value decreases the current capacity. The user can express the change to the current size as an absolute number, an increment, or a percentage of the current group size. In this question, specifying ExactCapacity with an adjustment value of -8 will not work, because when the type is ExactCapacity the adjustment value cannot be negative.

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
** DONE [#A] Calculate PercentChangeInCapacity for Dynamic Scaling
CLOSED: [2015-05-04 Mon 10:09]
Auto Scaling handles non-integer numbers returned by PercentChangeInCapacity as follows:

  • If the value is greater than 1, Auto Scaling rounds it to the lower value. For example, a return value of 12.7 is rounded to 12.
  • If the value is between 0 and 1, Auto Scaling rounds it to 1. For example, a return value of .67 is rounded to 1.
  • If the value is between 0 and -1, Auto Scaling rounds it to -1. For example, a return value of -.58 is rounded to -1.
  • If the value is less than -1, Auto Scaling rounds it to the higher value. For example, a return value of -6.67 is rounded to -6.

A user has configured AutoScaling with policy based scaling. The user has 53 instances running right now. The policy states that the count should decrease by 10%. How many instances will be running after the scaling activity is complete?

A. 48
B. 47
C. 52
D. 50

A

AutoScaling rounds the value returned by PercentChangeInCapacity to the higher number if the value is negative. If the current count is 53 and the policy gets executed, the number of instances to be removed is calculated as 5.3 (10% of 53). AWS will round it off to 5 and terminate 5 instances, leaving 48 running.

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
** # --8<-------------------------- separator ------------------------>8--
** DONE [#A] What processes Auto Scaling supports
CLOSED: [2015-05-04 Mon 09:29]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
| Name              | Summary |
|-------------------+---------|
| Launch            |         |
| Terminate         |         |
| HealthCheck       |         |
| ReplaceUnhealthy  |         |
| AZRebalance       |         |
| AlarmNotification |         |
| ScheduledActions  |         |
| AddToLoadBalancer |         |
*** DONE Auto Scaling processes question
CLOSED: [2015-05-04 Mon 08:19]
A sysadmin is trying to understand the Auto Scaling activities. Which of the below mentioned processes is not performed by Auto Scaling?

A. Availability Zone Balancing
B. Replace Unhealthy
C. Scheduled Actions
D. Reboot Instance

D

There are two primary types of Auto Scaling processes: Launch and Terminate, which launch or terminate instances, respectively. Some other actions performed by Auto Scaling are: AddToLoadBalancer, AlarmNotification, HealthCheck, AZRebalance, ReplaceUnhealthy, and ScheduledActions.
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
** DONE Feature: lifecycle hooks
CLOSED: [2015-05-04 Mon 16:49]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/introducing-lifecycle-hooks.html

An Auto Scaling lifecycle hook allows you to add custom actions to instances as they launch or terminate.
** DONE Deleting Auto Scaling resources is slightly different between the CLI and the web console
CLOSED: [2015-05-03 Sun 20:06]
A user is trying to delete an Auto Scaling group from the CLI. Which of the below mentioned steps is to be performed by the user?

A. There is no need to change the capacity. Run the as-delete-group command and it will reset all values to 0
B. Terminate the instances with the ec2-terminate-instance command
C. Terminate the Auto Scaling instances with the as-terminate-instance command
D. Set the minimum size and desired capacity to 0

D

If the user wants to delete the Auto Scaling group, the user should manually set the values of the minimum size and desired capacity to 0. Otherwise Auto Scaling will not allow the group to be deleted from the CLI. When deleting from the AWS console, the user need not set the values to 0, as the Auto Scaling console will automatically do so.
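
With the modern =aws= CLI the same dance looks roughly like this (the group name is a placeholder):

#+BEGIN_SRC sh
# Zero out the group so Auto Scaling terminates its instances...
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
    --min-size 0 --desired-capacity 0
# ...then delete the group (or pass --force-delete to skip the zeroing step).
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-asg
#+END_SRC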

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-process-shutdown.html
** DONE [#A] CloudWatch: Auto Scaling includes 7 metrics and 1 dimension
CLOSED: [2015-05-03 Sun 07:53]
A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Auto Scaling. Which of the below mentioned statements will help the user understand the functionality better?

A. It is not possible to set up detailed monitoring for Auto Scaling
B. Auto Scaling sends data every minute only and does not charge the user
C. Detailed monitoring will send data every minute without additional charges
D. In this case, Auto Scaling will send data every minute and will charge the user extra

D

CloudWatch is used to monitor AWS as well as custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. Auto Scaling includes 7 metrics and 1 dimension, and sends data to CloudWatch every 5 minutes by default. The user can enable detailed monitoring for Auto Scaling, which sends data to CloudWatch every minute. However, this incurs extra costs.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/supported_services.html
** DONE Scheduled auto scale actions
CLOSED: [2015-05-03 Sun 08:21]
A user has created a web application with Auto Scaling. The user is regularly monitoring the application, and he observed that the traffic is highest on Thursday and Friday between 8 AM and 6 PM. What is the best solution to handle scaling in this case?

A. Schedule a policy which scales up every day at 8 AM and scales down at 6 PM
B. Configure a batch process to add an instance by 8 AM and remove it by Friday 6 PM
C. Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
D. Add a new instance manually by 8 AM Thursday and terminate the same by 6 PM Friday

C

Auto Scaling based on a schedule allows the user to scale the application in response to predictable load changes. In this case the load increases by Thursday and decreases by Friday. Thus, the user can set up the scaling activity based on the predictable traffic patterns of the web application using Auto Scaling's scale-by-schedule feature.

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
** DONE Auto Scaling Dimensions and Metrics question
CLOSED: [2015-05-04 Mon 08:15]
A user has enabled detailed CloudWatch metric monitoring on an Auto Scaling group. Which of the below mentioned metrics will help the user identify the total number of instances in an Auto Scaling group, including pending, terminating and running instances?

A. GroupSumInstances
B. GroupInstancesCount
C. It is not possible to get a count of all three together; the user has to find the individual number of running, terminating and pending instances and sum them
D. GroupTotalInstances

D

CloudWatch is used to monitor AWS as well as custom services. For Auto Scaling, CloudWatch provides various metrics to get the group information, such as the number of Pending, Running or Terminating instances at any moment. If the user wants to get the total number of Running, Pending and Terminating instances at any moment, he can use the GroupTotalInstances metric.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/as-metricscollected.html
** DONE AutoScaling healthcheck question
CLOSED: [2015-05-04 Mon 08:17]
After moving an e-commerce website for a client from a dedicated server to AWS, you have also set up Auto Scaling to perform health checks on the instances in your group and replace instances that fail these checks. Your client has come to you with his own health check system that he wants you to use, as it has proved to be very useful prior to his site running on AWS. What do you think would be an appropriate response to this, given all that you know about Auto Scaling?

A. It is not possible to implement your own health check system. You need to use AWS's health check system.
B. It is possible to implement your own health check system and then send the instance's health information directly from your system to Auto Scaling.
C. It is possible to implement your own health check system and then send the instance's health information directly from your system to Auto Scaling, but only in the US East (N. Virginia) region.
D. It is possible to implement your own health check system, but we can't send the instance's health information directly from your system to Auto Scaling.

B

Auto Scaling periodically performs health checks on the instances in your group and replaces instances that fail these checks. By default, these health checks use the results of EC2 instance status checks to determine the health of an instance. If you use a load balancer with your Auto Scaling group, you can optionally choose to include the results of Elastic Load Balancing health checks. Auto Scaling marks an instance unhealthy if the call to the Amazon EC2 action DescribeInstanceStatus returns any state other than running, the system status shows impaired, or the call to the Elastic Load Balancing action DescribeInstanceHealth returns OutOfService in the instance state field. After an instance is marked unhealthy because of an Amazon EC2 or Elastic Load Balancing health check, it is scheduled for replacement. You can customize the health check conducted by your Auto Scaling group by specifying additional checks or by having your own health check system and then sending the instance's health information directly from your system to Auto Scaling.
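As a sketch of that last option, a custom health check system could report an instance as unhealthy through the AWS CLI (the instance ID below is a placeholder):

#+BEGIN_EXAMPLE
# Mark an instance unhealthy so Auto Scaling replaces it
aws autoscaling set-instance-health \
    --instance-id i-1234567890abcdef0 \
    --health-status Unhealthy
#+END_EXAMPLE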

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/healthcheck.html
** DONE Auto Scaling: Difference between "create instance from scratch" and "create from existing instances"
CLOSED: [2015-05-04 Mon 16:19]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html

Create from existing instances: ignores any additional block devices that were added to the instance after launch
*** Question
A user has launched an instance with an EBS backed root device and then attached two additional EBS volumes to it. The user is trying to create the Auto Scaling launch configuration using this instance as a reference.

How many additional block devices will the future instance have by default?

  • 3
  • 0
  • 2
  • 1

B

When a user creates a new launch configuration from an existing instance, Auto Scaling takes all the parameters from the existing instance except the instance ID and the configuration name. The launch configuration, however, does not take the new block device mapping into consideration; instead it uses the default mapping of the AMI. In this case the AMI had no additional block devices, so the future instances will have 0 additional EBS volumes.

http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
** DONE SSL Negotiation Configurations for Elastic Load Balancing
CLOSED: [2015-05-02 Sat 14:19]
A user has configured Elastic Load Balancing by enabling a Secure Socket Layer (SSL) negotiation configuration known as a Security Policy. Which of the below mentioned options is not part of this security policy while negotiating the SSL connection between the user and the client?

  • SSL Protocols
  • SSL Ciphers
  • Client Order Preference
  • Server Order Preference

C

Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration known as a Security Policy. It is used to negotiate the SSL connections between a client and the load balancer. A security policy is a combination of SSL Protocols, SSL Ciphers, and the Server Order Preference option.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-ssl-security-policy.html
** DONE [#A] Auto Scaling can replace unhealthy
CLOSED: [2015-05-08 Fri 22:43]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
ReplaceUnhealthy

Terminates instances that are marked as unhealthy and subsequently creates new instances to replace them. This process works with the HealthCheck process, and uses both the Terminate and Launch processes.
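If needed, individual processes such as ReplaceUnhealthy can be suspended and resumed with the AWS CLI. A minimal sketch, assuming a group named =my-asg=:

#+BEGIN_EXAMPLE
# Temporarily stop Auto Scaling from replacing unhealthy instances
aws autoscaling suspend-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes ReplaceUnhealthy

# Re-enable the process afterwards
aws autoscaling resume-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes ReplaceUnhealthy
#+END_EXAMPLE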

--8<-------------------------- separator ------------------------>8--

A sys admin is trying to understand the Auto Scaling activities. Which of the below mentioned processes is not performed by Auto Scaling?

  • Reboot Instance
  • Schedule Actions
  • Availability Zone Balancing
  • Replace Unhealthy

A

There are two primary types of Auto Scaling processes: Launch and Terminate, which launch or terminate instances, respectively. Some other actions performed by Auto Scaling are: AddToLoadbalancer, AlarmNotification, HealthCheck, AZRebalance, ReplaceUnHealthy, and ScheduledActions.

  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • AWS OpsWorks: DevOps Application Management Service :noexport:
http://aws.amazon.com/training/course-descriptions/devops-engineering/
Use AWS CloudFormation and AWS OpsWorks to deploy the infrastructure necessary to create development, test, and production environments for a software development project.
  • You pay for on-premises servers supported by AWS OpsWorks by the hour ($0.02).
** TODO [#A] Difference among: AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks and AWS CodeDeploy :IMPORTANT:
** DONE [#B] Feature: load-based vs time-based
CLOSED: [2015-05-06 Wed 12:50]
http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html

  • Time-based: instances that run only at certain times or on certain days.

  • Unlike 24/7 instances, which you must start and stop manually, you do not start or stop time-based or load-based instances yourself.

Automatic scaling is based on two instance types, which adjust a layer's online instances based on different criteria:

  • Load-based instances allow your stack to handle variable loads by starting additional instances when traffic is high and stopping instances when traffic is low, based on any of several load metrics. For example, you can have AWS OpsWorks start instances when the average CPU utilization exceeds 80% and stop instances when the average CPU load falls below 60%.

  • Time-based instances allow your stack to handle loads that follow a predictable pattern by including instances that run only at certain times or on certain days. For example, you could start some instances after 6PM to perform nightly backup tasks or stop some instances on weekends when traffic is lower.
*** A common practice is to use all three instance types together, as follows.
http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html

  • A set of 24/7 instances to handle the base load. You typically just start these instances and let them run continuously.

  • A set of time-based instances, which AWS OpsWorks starts and stops to handle predictable traffic variations. For example, if your traffic is highest during working hours, you would configure the time-based instances to start in the morning and shut down in the evening.

  • A set of load-based instances, which AWS OpsWorks starts and stops to handle unpredictable traffic variations. AWS OpsWorks starts them when the load approaches the capacity of the stack's 24/7 and time-based instances, and stops them when the traffic returns to normal.
*** How Load-based Scaling Differs from Auto Healing
http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html
Automatic load-based scaling uses load metrics that are averaged across all running instances. If the metrics remain between the specified thresholds, AWS OpsWorks does not start or stop any instances. With auto healing, on the other hand, AWS OpsWorks automatically starts a new instance with the same configuration when an instance stops responding. The instance may not be able to respond due to a network issue or some problem with the instance.

For example, suppose that your CPU upscaling threshold is 80%, and then one instance stops responding.

If auto healing is disabled, and the remaining running instances are able to keep average CPU utilization below 80%, AWS OpsWorks does not start a new instance. It starts a replacement instance only if the average CPU utilization across the remaining instances exceeds 80%.

If auto healing is enabled, AWS OpsWorks starts a replacement instance irrespective of the load thresholds.
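As a sketch, load-based scaling for a layer can be configured with the AWS CLI, using the CPU thresholds from the example above (the layer ID is a placeholder):

#+BEGIN_EXAMPLE
# Start one instance when average CPU exceeds 80%,
# stop one when it falls below 60%
aws opsworks set-load-based-auto-scaling --region us-east-1 \
    --layer-id 6bec29c9-EXAMPLE \
    --enable \
    --up-scaling '{"InstanceCount": 1, "ThresholdsWaitTime": 5, "CpuThreshold": 80.0}' \
    --down-scaling '{"InstanceCount": 1, "ThresholdsWaitTime": 10, "CpuThreshold": 60.0}'
#+END_EXAMPLE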

  • Amazon Kinesis: Real-time processing of streaming Big Data :noexport:
  • real-time data processing for big data
  • Amazon Kinesis's pricing is based on two dimensions: Shard Hour and PUT Record.
  • One shard provides a capacity of 1MB/sec data input and 2MB/sec data output.
  • One shard can support up to 1000 PUT records per second.
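A minimal sketch of creating a stream sized for the capacity above and pushing a record into it (stream name, partition key and payload are placeholders):

#+BEGIN_EXAMPLE
# 2 shards => up to 2 MB/sec in, 4 MB/sec out, 2000 PUT records/sec
aws kinesis create-stream --stream-name my-stream --shard-count 2

# Wait until the stream is ACTIVE, then push a record
aws kinesis describe-stream --stream-name my-stream
# (newer CLI versions may require the payload to be base64-encoded)
aws kinesis put-record --stream-name my-stream \
    --partition-key user-42 --data "a-log-line"
#+END_EXAMPLE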

Typical scenarios for Amazon Kinesis
| Scenarios                                            | Summary |
|------------------------------------------------------+---------|
| Accelerated log and data feed intake and processing  |         |
| Real-time metrics and reporting                      |         |
| Real-time data analytics                             |         |
| Complex stream processing                            |         |

http://aws.amazon.com/kinesis/
** Amazon Kinesis: Real-time processing of streaming Big Data
An entrepreneur comes to you with a new idea for the next big social media application that, according to him, will be bigger than Twitter and Facebook. If he is serious, he will obviously need something that can handle rapid and continuous data intake and aggregation in real time. Which Amazon service will be the main focus around which all your infrastructure is built?

  • Amazon AppStream
  • Amazon ElastiCache
  • AWS Elastic Beanstalk
  • Amazon Kinesis

D

You can use Amazon Kinesis for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight. The following are typical scenarios for using Amazon Kinesis:

  • Accelerated log and data feed intake and processing: You can have producers push data directly into a stream. For example, push system and application logs and they'll be available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Amazon Kinesis provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake.
  • Real-time metrics and reporting: You can use data collected into Amazon Kinesis for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
  • Real-time data analytics: This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Amazon Kinesis applications running in parallel.
  • Complex stream processing: You can create Directed Acyclic Graphs (DAGs) of Amazon Kinesis applications and data streams. This typically involves putting data from multiple Amazon Kinesis applications into another stream for downstream processing by a different Amazon Kinesis application.

http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html

  • Amazon Data Pipeline: Orchestration for Data-Driven Workflows :noexport:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/images/dp-how-dp-works-v2.png
AWS Data Pipeline enables you to automate the movement and transformation of data.

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-troubleshooting.html
Basic troubleshooting with AWS Data Pipeline

Activity types AWS Data Pipeline supports
| Activity             | Summary                                                                                  |
|----------------------+------------------------------------------------------------------------------------------|
| CopyActivity         | Copies data from one location to another.                                                |
| EmrActivity          | Runs an Amazon EMR cluster.                                                              |
| HiveActivity         | Runs a Hive query on an Amazon EMR cluster.                                              |
| HiveCopyActivity     | Runs Hive with advanced data filtering and support for S3DataNode and DynamoDBDataNode.  |
| PigActivity          | Runs a Pig script on an Amazon EMR cluster.                                              |
| RedshiftCopyActivity | Copies data to and from Amazon Redshift tables.                                          |
| ShellCommandActivity | Runs a custom UNIX/Linux shell command as an activity.                                   |
| SqlActivity          | Runs a SQL query on a database.                                                          |
** Sample of a Pipeline Definition File
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/cli-create-new-pipeline.html
#+BEGIN_EXAMPLE
{
  "objects": [
    {
      "id": "MySchedule",
      "type": "Schedule",
      "startDateTime": "2013-08-18T00:00:00",
      "endDateTime": "2013-08-19T00:00:00",
      "period": "1 day"
    },
    {
      "id": "S3Input",
      "type": "S3DataNode",
      "schedule": { "ref": "MySchedule" },
      "filePath": "s3://example-bucket/source/inputfile.csv"
    },
    {
      "id": "S3Output",
      "type": "S3DataNode",
      "schedule": { "ref": "MySchedule" },
      "filePath": "s3://example-bucket/destination/outputfile.csv"
    },
    {
      "id": "MyEC2Resource",
      "type": "Ec2Resource",
      "schedule": { "ref": "MySchedule" },
      "instanceType": "m1.medium",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "MyCopyActivity",
      "type": "CopyActivity",
      "runsOn": { "ref": "MyEC2Resource" },
      "input": { "ref": "S3Input" },
      "output": { "ref": "S3Output" },
      "schedule": { "ref": "MySchedule" }
    }
  ]
}
#+END_EXAMPLE
** TODO How to define task dependency for pipeline task

  • [#B] Amazon CloudFront: Global Content Delivery Network :noexport:
You can use a single Amazon CloudFront distribution to deliver your entire website, including both static and dynamic (or interactive) content.

Limitation:
| Name                | Comment                                                        |
|---------------------+----------------------------------------------------------------|
| Max size            | The maximum size of a single file through CloudFront is 20 GB. |
| Default Expire time | By default, each object automatically expires after 24 hours.  |

CloudFront charges in four areas:
| Name                                 | Comment |
|--------------------------------------+---------|
| Data Transfer Out                    |         |
| HTTP/HTTPS Requests                  |         |
| Invalidation Requests                |         |
| Dedicated IP Custom SSL certificates |         |
** DONE Serving Private Content through CloudFront
CLOSED: [2015-04-08 Wed 18:22]
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

Many companies that distribute content via the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee.

To securely serve this private content using CloudFront, you can do the following:

  • Require that your users access your private content by using special CloudFront signed URLs or signed cookies.

  • Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Using CloudFront URLs isn't strictly required, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.

When you create signed URLs or signed cookies to control access to your objects, you can specify the following restrictions:

  • An ending date and time, after which the URL is no longer valid.
  • (Optional) The date and time that the URL becomes valid.
  • (Optional) The IP address or range of addresses of the computers that can be used to access your content.
** DONE [#B] Choosing Between Signed URLs and Signed Cookies
CLOSED: [2015-04-08 Wed 18:02]
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
Use signed URLs in the following cases:
  • You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions.
  • You want to restrict access to individual files, for example, an installation download for your application.
  • Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.
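For example, a signed URL with a canned policy can be generated with the AWS CLI. A minimal sketch; the distribution domain, key pair ID and private key file are placeholders:

#+BEGIN_EXAMPLE
# Sign a URL that expires at the given date/time
aws cloudfront sign \
    --url https://d111111abcdef8.cloudfront.net/private/video.mp4 \
    --key-pair-id APKAEXAMPLE \
    --private-key file://cf-private-key.pem \
    --date-less-than 2015-12-31T00:00:00Z
#+END_EXAMPLE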

Use signed cookies in the following cases:

  • You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.
  • You don't want to change your current URLs.
** DONE CloudFront and S3 security
CLOSED: [2015-04-08 Wed 16:40]
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
http://www.comtechies.com/2014/01/how-to-host-highly-available-static.html
http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_solutions_architect_associate_examsample.pdf

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?

  • A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
  • B. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy.
  • C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
  • D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

A
** DONE Choosing Between Canned and Custom Policies for Signed URLs
CLOSED: [2015-04-08 Wed 18:06]
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html#private-content-custom-policy-statement-values

| Description                                                                                       | Canned Policy | Custom Policy  |
|---------------------------------------------------------------------------------------------------+---------------+----------------|
| The policy statement can be reused for multiple objects                                           | No            | Yes            |
| You can specify the date and time that users can begin to access your content                     | No            | Yes (optional) |
| You can specify the date and time that users can no longer access your content                    | Yes           | Yes            |
| You can specify the IP address or range of IP addresses of the users who can access your content  | No            | Yes (optional) |
| The signed URL includes a base64-encoded version of the policy, which results in a longer URL     | No            | Yes            |
** DONE RTMP protocol: Real Time Messaging Protocol
CLOSED: [2015-04-08 Wed 16:41]
https://console.aws.amazon.com/cloudfront/home?region=us-east-1#create-distribution:
http://en.wikipedia.org/wiki/Real_Time_Messaging_Protocol
Real Time Messaging Protocol (RTMP) was initially a proprietary protocol developed by Macromedia for streaming audio, video and data over the Internet, between a Flash player and a server. Macromedia is now owned by Adobe, which has released an incomplete version of the specification of the protocol for public use.

RTMP Create an RTMP distribution to speed up distribution of your streaming media files using Adobe Flash Media Server's RTMP protocol. An RTMP distribution allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location. Note the following:

To create an RTMP distribution, you must store the media files in an Amazon S3 bucket. To use CloudFront live streaming, create a web distribution.
** DONE cloudfront expiration question
CLOSED: [2015-05-09 Sat 10:14]
CloudFront is an AWS web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, and image files, to end users. CloudFront delivers your content through a worldwide network of data centers called edge locations.

If an object (your content) in an edge location isn't frequently requested, CloudFront might evict the object (remove it before its expiration date) to make room for objects that are more popular.

By default, each object automatically expires after ___ hours.

  • 48
  • 24
  • 2
  • 4

B

CloudFront speeds up the distribution of your content by routing each user request to the edge location that can best serve your content. You also get increased reliability and availability because copies of your objects are now held in multiple edge locations around the world. By default, each object automatically expires after 24 hours. To specify a different expiration time, configure your origin to add a value for either the Cache-Control max-age directive or the Expires header field to each object.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
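As a sketch, for an S3 origin the Cache-Control header can be set when the object is uploaded (the bucket and file names are placeholders):

#+BEGIN_EXAMPLE
# Let CloudFront cache this object for 7 days instead of the 24h default
aws s3 cp index.html s3://my-bucket/index.html \
    --cache-control "max-age=604800"
#+END_EXAMPLE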

  • [#A] Amazon CloudWatch: Resource and Application Monitoring :noexport:
https://aws.amazon.com/cloudwatch/

Limitation
| Name                        | Comment                                                                            |
|-----------------------------+------------------------------------------------------------------------------------|
| When custom metric reflects | Takes around 2 minutes to upload, but can be viewed only after around 15 minutes.  |
| Dimensions of Metrics       | You can assign up to ten dimensions to a metric.                                   |

Metrics exist only in the region in which they are created.
** TODO [#A] By default, CloudWatch can't see what the hypervisor can't see.
http://blog.celingest.com/en/2014/02/04/monitoring-ec2-memory-disk-usage-cloudwatch-custom-metrics/
CloudWatch relies on the information provided by the hypervisor, which can only see the most hardware-sided part of the instance's status, including CPU usage (but not load), total memory size (but not memory usage), number of I/O operations on the hard disks (but not their partition layout and space usage) and network traffic (but not the processes generating it).

While this can be seen as a shortcoming on the hypervisor's part, it's actually very convenient in terms of security and performance; otherwise the hypervisor would be an all-seeing eye, with more powers than the root user itself.
*** question
In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics:

  • A. web server visible metrics such as number of failed transaction requests
  • B. operating system visible metrics such as memory utilization
  • C. database visible metrics such as number of connections
  • D. hypervisor visible metrics such as CPU utilization

D
*** DONE Question: cloudwatch metric
CLOSED: [2015-04-13 Mon 14:39]
AWS Certified SysOps Administrator Associate Practice Exam

Which of the following requires a custom CloudWatch metric to monitor?

  • A. Disk usage activity of an Elastic Block Store volume attached to an Amazon EC2 instance
  • B. Disk usage activity of the ephemeral volumes of an Amazon EC2 instance
  • C. CPU Utilization of an Amazon Elastic Compute Cloud (EC2) instance
  • D. Disk full percentage of an Elastic Block Store volume

D
** TODO [#A] How long is the metric delay: 15 min or 5 min?
A user has set up an EBS backed instance and a CloudWatch alarm when the CPU utilization is more than 65%. The user has set up the alarm to watch it for 5 periods of 5 minutes each. The CPU utilization is 60% between 9 AM and 6 PM. The user stopped the EC2 instance for 15 minutes between 11 AM and 11:15 AM. What will be the status of the alarm at 11:30 AM?

  • Alarm
  • Insufficient Data
  • OK
  • Error

C

An Amazon CloudWatch alarm watches a single metric over a time period the user specifies, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The state of the alarm will be OK for the whole day. When the user stops the instance for three periods, the alarm may not receive data for those periods.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/AlarmThatSendsEmail.html
** TODO How many namespaces are supported in cloudwatch
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Statistic
** TODO [#B] Use cloudwatch to meter the billing activity
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/monitor_estimated_charges_with_cloudwatch.html
https://aws.amazon.com/blogs/aws/monitor-estimated-costs-using-amazon-cloudwatch-billing-metrics-and-alarms/
http://serverfault.com/questions/350971/how-can-i-monitor-daily-spending-on-aws
** TODO [#B] Cloudwatch define an alarm to monitor when an EC2 instance is started or stopped
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingAlarmActions.html
** # --8<-------------------------- separator ------------------------>8--
** DONE Feature: Dimension of Metrics
CLOSED: [2015-05-07 Thu 16:38]
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Statistic
A dimension is a name/value pair that helps you to uniquely identify a metric.

You can think of dimensions as categories for those characteristics.

CloudWatch treats each unique combination of dimensions as a separate metric.
** DONE [#A] Monitoring Amazon EC2 metrics :IMPORTANT:
CLOSED: [2015-04-13 Mon 14:26]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_automated_manual.html
| Item to Monitor                                 | Amazon EC2 Metric             | Monitoring Script                            |
|-------------------------------------------------+-------------------------------+----------------------------------------------|
| CPU utilization                                 | CPU Utilization               |                                              |
| Memory utilization                              |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Memory used                                     |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Memory available                                |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Network utilization                             | NetworkIn, NetworkOut         |                                              |
| Disk performance                                | DiskReadOps, DiskWriteOps     |                                              |
| Disk Swap utilization (Linux instances only)    |                               |                                              |
| Swap used (Linux instances only)                |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Page File utilization (Windows instances only)  |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Page File used (Windows instances only)         |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Page File available (Windows instances only)    |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Disk Reads/Writes                               | DiskReadBytes, DiskWriteBytes |                                              |
| Disk Space utilization (Linux instances only)   |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Disk Space used (Linux instances only)          |                               | Monitoring Scripts for Amazon EC2 Instances  |
| Disk Space available (Linux instances only)     |                               | Monitoring Scripts for Amazon EC2 Instances  |
** DONE [#B] Define a Custom Metric
CLOSED: [2015-04-15 Wed 10:37]
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/PublishMetrics.html

aws cloudwatch put-metric-data --metric-name RequestLatency --namespace GetStarted --timestamp 2014-02-14T20:30:00Z --value 87 --unit Milliseconds

aws cloudwatch put-metric-data --metric-name RequestLatency --namespace GetStarted --timestamp 2014-02-14T21:30:00Z --statistic-values Sum=577,Minimum=65,Maximum=189,SampleCount=5 --unit Milliseconds

aws cloudwatch put-metric-data --metric-name RequestLatency --namespace GetStarted --statistic-values Sum=806,Minimum=47,Maximum=328,SampleCount=6 --unit Milliseconds

aws cloudwatch get-metric-statistics --namespace GetStarted --metric-name RequestLatency --statistics Average --start-time 2014-02-14T00:00:00Z --end-time 2014-02-15T00:00:00Z --period 60

http://aws.amazon.com/cloudwatch/faqs/
Q: What is a Custom Metric?

You can use Amazon CloudWatch to monitor data produced by your own applications, scripts, and services. A custom metric is any metric you provide to Amazon CloudWatch. For example, you can use custom metrics as a way to monitor the time to load a web page, request error rates, the number of processes or threads on your instance, or the amount of work performed by your application. You can get started with custom metrics by using the PutMetricData API, our sample monitoring scripts for Windows and Linux, as well as a number of applications and tools offered by AWS partners.
** DONE [#B] Amazon CloudWatch Logs
CLOSED: [2015-04-15 Wed 11:25]
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/EC2NewInstanceCWL.html

http://aws.amazon.com/cloudwatch/faqs/
Q: What is Amazon CloudWatch Logs?
Amazon CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application and custom log files. With CloudWatch Logs, you can monitor your logs, in near real time, for specific phrases, values or patterns. For example, you could set an alarm on the number of errors that occur in your system logs or view graphs of latency of web requests from your application logs. You can then view the original log data to see the source of the problem. Log data can be stored and accessed for up to ten years in highly durable, low-cost storage so you don't have to worry about filling up hard drives.

Q: What kinds of things can I do with CloudWatch Logs?
CloudWatch Logs is capable of monitoring and storing your logs to help you better understand and operate your systems and applications. You can use CloudWatch Logs in a number of ways. When you use CloudWatch Logs, your existing log data is used for monitoring, so no code changes are required.

  • Real-time Application and System Monitoring: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required.
  • Long-term Log Retention: You can use CloudWatch Logs to store your log data for up to ten years in highly durable and cost-effective storage without worrying about hard drives running out of space. The CloudWatch Logs Agent makes it easy to quickly move both rotated and non-rotated log files off of a host and into the log service. You can then access the raw log event data when you need it.
** # --8<-------------------------- separator ------------------------>8--
** DONE cloudwatch: StatusCheckFailed, StatusCheckFailed_Instance, StatusCheckFailed_System
CLOSED: [2015-04-13 Mon 14:31]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_automated_manual.html
| Name                   | Summary                                      |
|------------------------+----------------------------------------------|
| System Status Checks   | problems require AWS involvement to repair   |
| Instance Status Checks | problems require your involvement to repair. |

  • System Status Checks: monitor the AWS systems required to use your instance to ensure they are working properly. e.g. Loss of network connectivity, Loss of system power, Software issues on the physical host, Hardware issues on the physical host

  • Instance Status Checks: monitor the software and network configuration of your individual instance. e.g. Failed system status checks, Misconfigured networking or startup configuration, Exhausted memory, Corrupted file system, Incompatible kernel
** DONE Question: cloudwatch autoscale
CLOSED: [2015-04-13 Mon 14:45]
AWS Certified SysOps Administrator Associate Practice Exam
As an application has increased in popularity, reports of performance issues have grown. The current configuration initiates scaling actions based on Avg CPU utilization; however during reports of slowness, CloudWatch graphs have shown that Avg CPU remains steady at 40 percent. This is well below the alarm threshold of 60 percent. Your developers have discovered that, due to the unique design of the application, performance degradation occurs on an instance when it is processing more than 200 threads.

What is the best way to ensure that your application scales to match demand?

  • A. Launch two to six additional instances outside of the AutoScaling group to handle the additional load.
  • B. Populate a custom CloudWatch metric for concurrent sessions and initiate scaling actions based on that metric instead of on CPU use.
  • C. Empirically determine the expected CPU use for 200 concurrent sessions and adjust the CloudWatch alarm threshold to be that CPU use.
  • D. Add a script to each instance to detect the number of concurrent sessions. If the number of sessions remains over 200 for five minutes, have the instance increase the desired capacity of the AutoScaling group by one.
** DONE AWS delete Custom Metrics
CLOSED: [2015-04-15 Wed 10:43]
http://stackoverflow.com/questions/23229268/how-do-you-delete-an-aws-cloudwatch-metric

There is no API to delete AWS CloudWatch metrics. Just wait two weeks after your last metric has been pushed; it will disappear automatically.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html

When you use the mon-put-data command, you must use a date range within the past two weeks. There is currently no function to delete data points. Amazon CloudWatch automatically deletes data points with a timestamp more than two weeks old.
** DONE CPUCreditUsage and CPUCreditBalance
CLOSED: [2015-04-15 Wed 10:48]
http://aws.amazon.com/ec2/faqs/
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ec2-metricscollected.html

  • CPUCreditUsage indicates the amount of CPU Credits used.
  • CPUCreditBalance indicates the balance of CPU Credits.
** # --8<-------------------------- separator ------------------------>8--
** DONE Cloudwatch state question
CLOSED: [2015-05-01 Fri 14:51]
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-instance-monitoring.html
You have set up CloudWatch to monitor a content and media server that you have just finished deploying, and it seems to be monitoring it successfully. How many possible alarm states can you expect to see in CloudWatch?

  • 3
  • 10
  • 6 - 3 system defined and 3 user defined
  • 2

A: OK, ALARM, INSUFFICIENT_DATA
** DONE custom metric question
CLOSED: [2015-05-09 Sat 11:08]
A user has set up a web application on EC2. The user is generating a log of the application performance every second, with multiple entries for each second. If the user wants to send that data to CloudWatch every minute, what should he do?

  • It is not possible to send the custom metric to CloudWatch every minute
  • The user should send only the data of the 60th second, as CloudWatch will map the received data's timezone with the sent data's timezone
  • Give CloudWatch the Min, Max, Sum, and SampleCount of the values for every minute
  • Calculate the average of one minute and send the data to CloudWatch

D

Amazon CloudWatch aggregates statistics according to the period length that the user specifies when getting data from CloudWatch. The user can publish as many data points as he wants with the same or similar time stamps. CloudWatch aggregates them by the period length when the user requests statistics about those data points. CloudWatch records the average (sum of all items divided by the number of items) of the values received for every 1-minute period, as well as the number of samples, maximum value, and minimum value for the same time period. CloudWatch will aggregate all the data which have time stamps within a one-minute period.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html
** DONE custom metric question
CLOSED: [2015-05-02 Sat 14:03]
A user is publishing custom metrics to CloudWatch. Which of the below mentioned statements will help the user understand the functionality better?

  • The user can use the CloudWatch Import tool
  • If the user is uploading the custom data, the user must supply the namespace, timezone, and metric name as part of the command
  • The user can view as well as upload data using the console, CLI and APIs
  • The user should be able to see the data in the console after around 15 minutes

D

AWS CloudWatch supports custom metrics. The user can always capture the custom data and upload it to CloudWatch using the CLI or APIs. The user always has to include the namespace as part of the request; the other parameters are optional. If the user has uploaded data using the CLI, he can view it as a graph inside the console. The data will take around 2 minutes to upload, but can be viewed only after around 15 minutes.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html
** DONE cloudwatch timezone
CLOSED: [2015-05-02 Sat 14:17]
A user is checking the CloudWatch metrics from the AWS console. The user notices that the CloudWatch data is coming in UTC. The user wants to convert the data to a local time zone. How can the user perform this?

  • The user should have sent the local timezone while uploading the data, so that CloudWatch will show the data only in the local timezone
  • The CloudWatch data is always in UTC; the user has to manually convert the data
  • In the CloudWatch console, select the local timezone under the Time Range tab to view the data as per the local timezone
  • In the CloudWatch dashboard, the user should set the local timezone so that CloudWatch shows the data only in the local time zone

D

If the user is viewing the data inside the CloudWatch console, the console provides options to filter values either using the relative period, such as days/hours, or using the Absolute tab where the user can provide data with a specific date and time. The console also provides the option to search using the local timezone under the time range caption, because the time range tab allows the user to change the time zone.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/modify_graph_date_time.html
** DONE metric of DiskReadBytes/DiskWriteBytes is for ephemeral storage
CLOSED: [2015-05-02 Sat 14:21]
A user has configured CloudWatch monitoring on an EBS backed EC2 instance. If the user has not attached any additional device, which of the below mentioned metrics will always show a 0 value?

  • NetworkOut
  • NetworkIn
  • DiskReadBytes
  • CPUUtilization

C

CloudWatch is used to monitor AWS services as well as custom services. When the user is monitoring EC2 instances, CloudWatch captures 7 instance-level metrics and 3 status-check metrics for the EC2 instance. Since this is an EBS backed instance, it will not have ephemeral storage attached to it. Out of the 7 EC2 metrics, the 4 disk-related metrics DiskReadOps, DiskWriteOps, DiskReadBytes and DiskWriteBytes are available only when there is ephemeral storage attached to an instance. For an EBS backed instance without any additional device, this data will be 0.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ec2-metricscollected.html
** DONE cloudwatch: the billing metric data is account-wide, instead of region-wise
CLOSED: [2015-05-08 Fri 14:56]
The billing metric data is stored in the US East (Northern Virginia) region and represents worldwide charges.

A user has configured the AWS CloudWatch alarm for estimated usage charges in the US East region. Which of the below mentioned statements is not true with respect to the estimated charges?

  • It will include the estimated charges of every AWS service
  • It will store the estimated charges data of the last 14 days
  • The metric data will show data specific to that region
  • The metric data will represent the data of all the regions

C

When the user has enabled the monitoring of estimated charges for the AWS account with AWS CloudWatch, the estimated charges are calculated and sent several times daily to CloudWatch in the form of metric data. This data will be stored for 14 days. The billing metric data is stored in the US East (Northern Virginia) region and represents worldwide charges. This data also includes the estimated charges for every service in AWS used by the user, as well as the estimated overall AWS charges.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/gs_monitor_estimated_charges_with_cloudwatch.html
** DONE Feature: CloudWatch: It's possible to show multiple metrics with different units on the same graph.
CLOSED: [2015-05-09 Sat 11:18]
A user is displaying the CPU utilization, Network in, and Network out CloudWatch metrics data of a single instance on the same graph. The graph uses one Y-axis for CPU utilization and Network in, and another Y-axis for Network out. Since Network in is too high, the CPU utilization data is not clearly visible on the graph. How can the data be viewed better on the same graph?

  • Add a third Y-axis with the console to show all the data in proportion
  • It is not possible to show multiple metrics with different units on the same graph
  • Change the units of CPU utilization so it can be shown in proportion with Network
  • Change the axis of Network by using the Switch command from the graph

D

Amazon CloudWatch provides the functionality to graph the metric data generated either by the AWS services or by custom metrics, to make it easier for the user to analyse. It is possible to show multiple metrics with different units on the same graph. If the graph is not plotted properly due to a difference in units between two metrics, the user can change the Y-axis of one of the metrics by selecting that metric and clicking on the Switch option.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/switch_graph_axes.html

  • [#B] Amazon SQS: Message Queue Service :noexport:
http://aws.amazon.com/sqs/
| Name                       | Summary                                              |
|----------------------------+------------------------------------------------------|
| Default Visibility Timeout |                                                      |
| Message Retention Period   | Avoid Queue overload with very old messages          |
| Maximum Message Size       | Avoid too big message                                |
| Delivery Delay             | Delayed queue, postpone the delivery of new messages |
| Receive Message Wait Time  | Long polling: reduces the number of empty responses  |
  • Dead Letter Queue settings: Maximum Receives is the maximum number of times a message can be received before it is sent to the Dead Letter Queue.

  • You can use SQS to transmit any volume of data, at any level of throughput

  • Five core APIs: CreateQueue, SendMessage, ReceiveMessage, ChangeMessageVisibility, and DeleteMessage.

  • The message payload can contain up to 256KB of text in any format. Each 64KB 'chunk' of payload is billed as 1 request. For example, a single API call with a 256KB payload will be billed as four requests.

  • Messages can be retained in queues for up to 14 days.

  • When a message is received, it becomes "locked" while being processed.
** DONE [#A] SQS does not guarantee first in, first out delivery of messages.
CLOSED: [2015-05-07 Thu 16:48]
You are building an online store on AWS that uses SQS to process your customer orders. Your backend system needs those messages in the same sequence the customer orders were placed in. How can you achieve that?

  • You can do this with SQS, but you also need to use SWF
  • It is not possible to do this with SQS
  • You can use sequencing information on each message
  • Messages will arrive in the same order by default

C

Amazon SQS is engineered to always be available and deliver messages. One of the resulting tradeoffs is that SQS does not guarantee first in, first out delivery of messages. For many distributed applications, each message can stand on its own, and as long as all messages are delivered, the order is not important. If your system requires that order be preserved, you can place sequencing information in each message, so that you can reorder the messages when the queue returns them.
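A minimal sketch of carrying sequencing information in a message attribute (the queue name, attribute name and payload are placeholders):

#+BEGIN_EXAMPLE
QUEUE_URL=$(aws sqs create-queue --queue-name orders \
    --query QueueUrl --output text)

# Attach a sequence number so the consumer can reorder messages
aws sqs send-message --queue-url "$QUEUE_URL" \
    --message-body '{"order_id": 1001}' \
    --message-attributes \
    '{"SequenceNumber": {"DataType": "Number", "StringValue": "1"}}'
#+END_EXAMPLE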

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/Welcome.html
** [#A] Feature: Amazon SQS Delay Queues
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html

  • Delay queues are similar to visibility timeouts in that both features make messages unavailable to consumers for a specific period of time.

The difference between delay queues and visibility timeouts:

  • For delay queues, a message is hidden when it is first added to the queue
  • For visibility timeouts, a message is hidden only after it is retrieved from the queue.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/images/Delay_Queues.png
** DONE Feature: Amazon SQS Long Polling: reduction of the number of empty responses
CLOSED: [2015-04-02 Thu 15:09]
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
  • Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.
** DONE Feature: SQS Dead Letter Queues
CLOSED: [2015-04-02 Thu 15:07]
http://aws.amazon.com/sqs/details/

Developers can handle stuck messages with Dead Letter Queues. When the maximum receive count is exceeded for a message it will be moved to the Dead Letter Queue (DLQ) associated with the original queue. Developers can setup separate consumer processes for DLQs which can help analyze and understand why messages are getting stuck.
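A minimal sketch of attaching a dead letter queue via a redrive policy (the DLQ ARN, account ID and receive count are placeholders):

#+BEGIN_EXAMPLE
# After 5 failed receives, messages move to the DLQ
aws sqs set-queue-attributes --queue-url "$QUEUE_URL" \
    --attributes '{"RedrivePolicy":
      "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\",\"maxReceiveCount\":\"5\"}"}'
#+END_EXAMPLE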

  • [#B] Amazon SNS: Push Notification Service :noexport:
  • In Amazon SNS, there are two types of clients, publishers and subscribers, also referred to as producers and consumers.

  • SNS Common Actions
| Name                        | Summary                                                                         |
|-----------------------------+---------------------------------------------------------------------------------|
| Create topic                | Create a communication channel to send messages and subscribe to notifications |
| Create platform application | Create a platform application for mobile devices                                |
| Create subscription         | Subscribe an endpoint to a topic to receive messages published to that topic   |
| Publish message             | Publish a message to a topic or as a direct publish to a platform endpoint     |
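These actions map directly to the AWS CLI. A minimal sketch (the topic name and email address are placeholders):

#+BEGIN_EXAMPLE
# Create a topic, subscribe an email endpoint, publish a message
TOPIC_ARN=$(aws sns create-topic --name my-topic \
    --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
    --protocol email --notification-endpoint ops@example.com
aws sns publish --topic-arn "$TOPIC_ARN" \
    --subject "Test" --message "Hello from SNS"
#+END_EXAMPLE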

  • SNS Notification Deliveries
http://docs.aws.amazon.com/sns/latest/dg/images/sns-how-works.png
| Endpoint Type              | Free Tier                              | Price             |
|----------------------------+----------------------------------------+-------------------|
| Mobile Push Notifications  | 1 million                              | $0.50 per million |
| SMS                        | 100                                    | $0.75 per 100     |
| email/email-JSON           | 1,000                                  | $2.00 per 100,000 |
| HTTP/s                     | 100,000                                | $0.60 per million |
| Simple Queue Service (SQS) | No charge for deliveries to SQS Queues |                   |
| Application                |                                        |                   |
| AWS Lambda                 |                                        |                   |
** TODO The number of notifications you publish, the number of notifications you deliver
** TODO [#A] Application in SNS
http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

  • AWS Import/Export: transfer data with your own device, bypassing the internet :noexport:
http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_GuideAndLimit.html
  • The device must be compatible with Red Hat Linux and warranted by the manufacturer to support eSATA, USB 3.0, or USB 2.0 interface on Linux.
  • Maximum device size is 14 inches/35 centimeters high by 19 inches/48 centimeters wide by 36 inches/91 centimeters deep (8Us in a standard 19 inch/48 centimeter rack).
  • Maximum weight is 50 pounds/22.5 kilograms.
  • Maximum device capacity is 16 TB.

http://aws.amazon.com/importexport/

http://aws.amazon.com/importexport/getting-started/

http://aws.amazon.com/importexport/pricing/

To use AWS Import/Export you simply:

  • Prepare a portable storage device (see the Product Details page for supported devices).
  • Submit a Create Job request to AWS that includes your Amazon S3 bucket, Amazon EBS or Amazon Glacier region, AWS Access Key ID, and return shipping address. You'll receive back a unique identifier for the job, a digital signature for authenticating your device, and an AWS address to which to ship your storage device.
  • Securely identify and authenticate your device. For Amazon S3, place the signature file on the root directory of your device. For Amazon EBS or Amazon Glacier, tape the signature barcode to the exterior of the device.
  • Ship your device along with its interface connectors and power supply to AWS.

When your package arrives, it will be processed and securely transferred to an AWS data center, where your device will be attached to an AWS Import/Export station. After the data load completes, the device will be returned to you. ** DONE AWS Import/Export requirements CLOSED: [2015-05-08 Fri 22:56] You are about to send an external hard drive to AWS so that you can import a huge amount of data from it. However you remember that there are some limitations on this, one of those being the maximum device capacity. What is the maximum device capacity of a storage device when sent to AWS for the AWS Import/Export service?

  • 16 TB
  • 10 TB
  • 17 TB
  • 5 TB

A

AWS Import/Export accelerates transferring large amounts of data between the AWS cloud and portable storage devices that you mail to AWS. AWS transfers data directly onto and off of your storage devices using Amazon's high-speed internal network. To connect your storage device to one of AWS's Import/Export stations, your device must meet the following requirements:

  • The device must be compatible with Red Hat Linux and warranted by the manufacturer to support eSATA, USB 3.0, or USB 2.0 interface on Linux.
  • Maximum device size is 14 inches/35 centimeters high by 19 inches/48 centimeters wide by 36 inches/91 centimeters deep (8Us in a standard 19 inch/48 centimeter rack).
  • Maximum weight is 50 pounds/22.5 kilograms.
  • Maximum device capacity is 16 TB.
  • Power requirements vary by region.

http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_GuideAndLimit.html

  • Amazon Glacier: Archive Storage in the Cloud :noexport:
Glacier Limitation
| Name            | Comment                                                                                               |
|-----------------+-------------------------------------------------------------------------------------------------------|
| Data encryption | Amazon Glacier automatically encrypts the data using AES-256.                                         |
| Vault count     | You can create up to 1,000 vaults per account per region.                                             |
| Storage charge  | Storage in Glacier costs 1 cent per GB for each month                                                 |
| Job expiration  | A job won't expire for at least 24 hours after completion, so you can download the output within 24 h |
|-----------------+-------------------------------------------------------------------------------------------------------|
| Archives Size   | In a single operation, you can upload archives from 1 byte to up to 4 GB in size.                     |
| Archives Size   | Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 * 4 GB). |
  • The Amazon Glacier vault inventory is only updated once a day.
  • The Amazon Glacier vault inventory is only updated once a day.

  • Most Amazon Glacier jobs take about four hours to complete.

  • Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.

  • Amazon Glacier prepares an inventory for each vault periodically, every 24 hours. If there have been no archive additions or deletions to the vault since the last inventory, the inventory date is not updated. When you initiate a job for a vault inventory, Amazon Glacier returns the last inventory it generated, which is a point-in-time snapshot and not real-time data.
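A minimal sketch of requesting and fetching a vault inventory with the CLI (the vault name and job ID are placeholders; =--account-id -= means the current account):

#+BEGIN_EXAMPLE
# Kick off an inventory-retrieval job (completes after several hours)
aws glacier initiate-job --account-id - --vault-name my-vault \
    --job-parameters '{"Type": "inventory-retrieval"}'

# Poll the job, then download its output once it is complete
aws glacier describe-job --account-id - --vault-name my-vault --job-id <job-id>
aws glacier get-job-output --account-id - --vault-name my-vault \
    --job-id <job-id> inventory.json
#+END_EXAMPLE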

http://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html

Constraints of Glacier:

  • Data can't be restored immediately; it takes several hours (4-5) after issuing the restore request before the files are available.
  • Costs depend on the amount of data stored, and also on the amount restored.
** DONE [#B] inventory info may not be up to date: updated every 24 hours for each vault
CLOSED: [2015-05-04 Mon 12:43]

A user has uploaded 1 archive to an empty existing Glacier vault. The user is trying to get the inventory of archives using a job. Will Glacier return the list of uploaded archives in that job?

  • It is not possible to get the list of the archives using the vault inventory job
  • It may or may not list the archive in the result
  • It will not list the archive in the job result
  • It will list the archive in the job result

B

http://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html
** DONE [#A] How do I know the upload of a big archive to Glacier completed correctly: it's a synchronous call
CLOSED: [2015-05-04 Mon 10:26]
http://docs.aws.amazon.com/amazonglacier/latest/dev/configuring-notifications-console.html
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
** DONE Difference between "Archive Retrieval Job" and "Vault Inventory Retrieval Job"
CLOSED: [2015-05-04 Mon 10:22]
Inventory: metadata
** DONE When retrieving data, is it stored in a temporary S3 object paid for by Amazon?
CLOSED: [2015-05-01 Fri 16:41]
Get it from the API, after Amazon notifies us that the retrieval is ready.
http://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-using-java.html
http://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html
** DONE glacier: you can create up to 1,000 vaults per account per region
CLOSED: [2015-05-06 Wed 10:51]
Having set up an enormous amount of storage for a customer in Amazon Glacier, you are concerned that there is some sort of limitation. You have organised all the data in vaults, as is the standard for Amazon Glacier. How many vaults can you create?

  • 1,000 vaults per account total
  • 1,000 vaults per account per region
  • 100 vaults per account per region
  • 10,000 vaults per account per region

B

You use vaults to organize the data you store in Amazon Glacier. Each archive is stored in a vault of your choice. You may control access to your data by setting vault-level access policies using the AWS Identity and Access Management (IAM) service. You can also attach notification policies to your vaults. These enable you or your application to be notified when data that you have requested for retrieval is ready for download. You can create up to 1,000 vaults per account per region.

http://aws.amazon.com/glacier/faqs/
** TODO Upload data to Glacier and download data locally
http://www.virtualtothecore.com/en/freeze-your-backups-into-aws-glacier/
| Name         | Summary                    |
|--------------+----------------------------|
| upload files | aws glacier upload-archive |

aws glacier upload-archive help

aws glacier upload-archive --vault-name valut-denny1 --account-id 309775215851

#+BEGIN_EXAMPLE
macs-MacBook-Air:org_share mac$ aws glacier upload-archive --vault-name valut-denny1 --account-id 309775215851 --body ./question.org
{
    "archiveId": "Im_0LHkdz6AhMoGlq_CIzciivvLnrwYxw0rkp8vzuFePmJ-YfhIu819oyTPl4ikq-6vKShPVZF-r1RWuSF5jTCVllfa5G9FPr9Y0TNURxZlJVbGU1VKeg5xEQWDEr83g_A5aLbNRlA",
    "checksum": "4fd49cbb1f708d2cc7eaa7f616ca867d7c6ec7495a07430b01d91a6e5c9d313e",
    "location": "/309775215851/vaults/valut-denny1/archives/Im_0LHkdz6AhMoGlq_CIzciivvLnrwYxw0rkp8vzuFePmJ-YfhIu819oyTPl4ikq-6vKShPVZF-r1RWuSF5jTCVllfa5G9FPr9Y0TNURxZlJVbGU1VKeg5xEQWDEr83g_A5aLbNRlA"
}

macs-MacBook-Air:org_share mac$ aws glacier describe-vault --vault-name valut-denny1 --account-id 309775215851
{
    "SizeInBytes": 0,
    "VaultARN": "arn:aws:glacier:us-east-1:309775215851:vaults/valut-denny1",
    "NumberOfArchives": 0,
    "CreationDate": "2015-04-30T19:10:37.798Z",
    "VaultName": "valut-denny1"
}

macs-MacBook-Air:org_share mac$ aws glacier list-vaults --account-id 309775215851
{
    "VaultList": [
        {
            "SizeInBytes": 0,
            "VaultARN": "arn:aws:glacier:us-east-1:309775215851:vaults/valut-denny1",
            "CreationDate": "2015-04-30T19:10:37.798Z",
            "VaultName": "valut-denny1",
            "NumberOfArchives": 0
        }
    ]
}

macs-MacBook-Air:org_data mac$ aws glacier delete-archive --vault-name valut-denny1 --account-id 309775215851 --archive-id Im_0LHkdz6AhMoGlq_CIzciivvLnrwYxw0rkp8vzuFePmJ-YfhIu819oyTPl4ikq-6vKShPVZF-r1RWuSF5jTCVllfa5G9FPr9Y0TNURxZlJVbGU1VKeg5xEQWDEr83g_A5aLbNRlA
macs-MacBook-Air:org_data mac$ echo $?
0
#+END_EXAMPLE

  • Amazon Redshift: Managed PB-scale Data Warehouse Service :noexport:
  • Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients.
** DONE [#A] Redshift can store data at PB scale and looks like PostgreSQL. How is it implemented?
CLOSED: [2015-05-04 Mon 22:53]
http://aws.amazon.com/redshift/faqs/
Q: How does the performance of Amazon Redshift compare to most traditional databases for data warehousing and analytics?

Amazon Redshift uses a variety of innovations to achieve up to ten times higher performance than traditional databases for data warehousing and analytics workloads

  • Columnar Data Storage
  • Advanced Compression
  • Massively Parallel Processing (MPP)
** DONE How does Redshift differ from RDS?
CLOSED: [2015-05-04 Mon 22:53]
http://aws.amazon.com/redshift/faqs/

Q: When would I use Amazon Redshift vs. Amazon RDS?

Both Amazon Redshift and Amazon RDS enable you to run traditional relational databases such as MySQL, Oracle and SQL Server in the cloud while offloading database administration. Customers use Amazon RDS databases both for online-transaction processing (OLTP) and for reporting and analysis. Amazon Redshift harnesses the scale and resources of multiple nodes and uses a variety of optimizations to provide order of magnitude improvements over traditional databases for analytic and reporting workloads against very large data sets. Amazon Redshift provides an excellent scale-out option as your data and query complexity grows, or if you want to prevent your reporting and analytic processing from interfering with the performance of your OLTP workload.
** DONE When would I use Amazon Redshift vs. Amazon Elastic MapReduce (Amazon EMR)?
CLOSED: [2015-05-04 Mon 22:52]
http://aws.amazon.com/redshift/faqs/
Q: When would I use Amazon Redshift vs. Amazon Elastic MapReduce (Amazon EMR)?

Amazon Redshift is ideal for large volumes of structured data that you want to persist and query using standard SQL and your existing BI tools. Amazon EMR is ideal for processing and transforming unstructured or semi-structured data to bring into Amazon Redshift, and is also a much better option for data sets that are relatively transitory and not stored for long-term use.
** DONE Connect to Redshift cluster by psql
CLOSED: [2015-05-04 Mon 22:46]
http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-from-psql.html
=psql -h <cluster-endpoint> -p <port> -U <user> -d <dbname>= (the Redshift default port is 5439)
** DONE Amazon Redshift is used to build up a data warehouse
CLOSED: [2015-05-04 Mon 22:33]
You have just been given a scope for a new client who has an enormous amount of data (petabytes) that he constantly needs analyzed. Currently he is paying a huge amount of money for a data warehousing company to do this for him and is wondering if AWS can provide a cheaper solution. Do you think AWS has a solution for this?

  • Yes. Amazon Redshift
  • Yes. Amazon RDS
  • Yes. Your choice of relational AMIs on Amazon EC2 and EBS
  • Yes. Amazon DynamoDB

Answer: A. Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis, or any SSH-enabled host.

https://aws.amazon.com/running_databases/#redshift_anchor
** DONE [#B] Amazon Redshift question
CLOSED: [2015-05-04 Mon 22:36]
After recommending Amazon Redshift as an alternative to paying a data warehousing company to analyze a customer's data, he asks you to explain why you are recommending Redshift. Which of the following would not be a good response to his request?

  • It uses primary keys to access data and doesn't need complex query capabilities like transactions or joins.
  • You don't have the administrative burden of running one's own data warehouse and dealing with setup, durability, monitoring, scaling and patching.
  • It has high performance at scale as data and query complexity grows.
  • You can run existing or new applications, code, or tools that require a relational database.

Answer: D. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis, or any SSH-enabled host. AWS recommends Amazon Redshift for customers who have a combination of needs, such as:
  • High performance at scale as data and query complexity grows
  • Desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads
  • Large volumes of structured data to persist and query using standard SQL and existing BI tools
  • Desire to avoid the administrative burden of running one's own data warehouse and dealing with setup, durability, monitoring, scaling and patching

https://aws.amazon.com/running_databases/#redshift_anchor
** DONE Redshift Quiz
CLOSED: [2015-05-04 Mon 23:03]
http://searchaws.techtarget.com/quiz/Test-your-knowledge-Amazon-Redshift-quiz

  • Amazon Machine Learning :noexport:
http://searchaws.techtarget.com/answer/What-are-the-benefits-of-a-machine-learning-service
http://searchaws.techtarget.com/news/2240241093/Azure-Machine-Learning-turns-heads-among-AWS-users

| Category                           | Summary                                               |
|------------------------------------+-------------------------------------------------------|
| A binary classification model      | predicts one of two possible outcomes, e.g. yes or no |
| A multi-class classification model | predicts one of several conditions                    |
| A regression model                 | yields an actual value or number                      |

A machine learning service can address three different types of tasks:

  1. A binary classification model can predict one of two possible outcomes such as a yes or no response.

  2. A multi-class classification model can predict multiple conditions. Multi-class classification, for example, could detect a customer's Web shopping behaviors.

  3. A regression model yields an actual value or number. Regression models can predict the best selling price for a product or the number of units that will sell.
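These three model types map directly onto the (now-legacy) Amazon Machine Learning API: =create-ml-model= takes an =--ml-model-type= of BINARY, MULTICLASS, or REGRESSION. A minimal sketch; the model and datasource ids are hypothetical, and the datasource must already exist:
#+BEGIN_EXAMPLE
# Train a regression model from an existing datasource (ids are hypothetical)
aws machinelearning create-ml-model \
    --ml-model-id my-regression-model-001 \
    --ml-model-type REGRESSION \
    --training-data-source-id my-datasource-001
#+END_EXAMPLE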

  • Amazon DevPay :noexport:
  • Amazon DevPay is the only payments application that automatically meters your customers' usage of Amazon Web Services (such as Amazon S3 or Amazon EC2) and allows you to charge your customers for that usage at whatever price you choose.

  • Amazon DevPay shares the risk of customer nonpayment with developers. You're responsible for the cost of AWS services that a customer consumes only up to the amount that the customer actually pays. If a customer does not pay, AWS does not charge you these costs.

  • Amazon DevPay is the simplest way for developers to get paid for Amazon EC2 Machine Images (AMIs) or applications they build on top of Amazon S3.

  • Amazon DevPay allows developers to start selling their application without using complex APIs or writing code to build an order pipeline or a billing system.

  • [#B] AWS Direct Connect: Dedicated Network Connection to AWS :noexport:
  • AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS.

  • If you are connecting from a remote location, you can work with an APN Partner supporting Direct Connect or a network carrier of your choice.

http://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html
** DONE AWS Direct Connect VS AWS VPN question
CLOSED: [2015-05-06 Wed 11:47]
(From an AWS Certified SysOps Administrator Associate practice exam.) An organization has established an Internet-based VPN connection between their on-premises data center and AWS. They are considering migrating from VPN to AWS Direct Connect.

Which operational concern should drive an organization to consider switching from an Internet-based VPN connection to AWS DirectConnect?

A. AWS Direct Connect provides greater redundancy than an Internet-based VPN connection.
B. AWS Direct Connect provides greater resiliency than an Internet-based VPN connection.
C. AWS Direct Connect provides greater bandwidth than an Internet-based VPN connection.
D. AWS Direct Connect provides greater control of network provider selection than an Internet-based VPN connection.


Answer: D

  • [#B] AWS VM Import/Export: too many limitations :noexport:
http://aws.amazon.com/ec2/vm-import/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instances_of_your_vm.html

VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment.

To import your images, use the Amazon EC2 API tools, or if you use the VMware vSphere virtualization platform, the Amazon EC2 VM Import Connector to target a virtual machine (VM) image in your existing environment. You then specify which Availability Zone and instance type you want to run in Amazon EC2, and VM Import/Export will automatically transfer the image file and create your instance. Once you have imported your VMs, you can take advantage of Amazon's elasticity, scalability and monitoring via offerings like Auto Scaling, Elastic Load Balancing and CloudWatch to support your imported images. Your instance will be up and running in Amazon EC2 in as little time as it takes to upload your image.

You can export previously imported EC2 instances using the Amazon EC2 API tools. You simply specify the target instance, virtual machine file format, and a destination Amazon S3 bucket, and VM Import/Export will automatically export the instance to the Amazon S3 bucket. You can then download and launch the exported VM within your on-premises virtualization infrastructure.

[email protected]

aws ec2 create-instance-export-task --description "Ubuntu Denny export test" --instance-id i-20fd89ed --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=denny-s3-bucket,S3Prefix=Export
** DONE [email protected] must have WRITE and READ_ACL permission on the S3 bucket.
CLOSED: [2015-04-06 Mon 15:09]
A client error (AuthFailure) occurred when calling the CreateInstanceExportTask operation: [email protected] must have WRITE and READ_ACL permission on the S3 bucket.
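One hedged way to clear this error: grant the VM Import/Export service account (the address shown in the error message, redacted above) WRITE and READ_ACP on the bucket; S3 calls the latter READ_ACP even though the error says READ_ACL. Note that =put-bucket-acl= replaces the whole ACL, so re-grant yourself full control in the same call:
#+BEGIN_EXAMPLE
# Grant the VM Import/Export service account WRITE and READ_ACP on the bucket.
# put-bucket-acl replaces the existing ACL, so include your own grant too.
aws s3api put-bucket-acl --bucket denny-s3-bucket \
    --grant-full-control id=<your-canonical-user-id> \
    --grant-write 'emailaddress="[email protected]"' \
    --grant-read-acp 'emailaddress="[email protected]"'
#+END_EXAMPLE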

aws ec2 create-instance-export-task --description "Ubuntu Denny export test" --instance-id i-20fd89ed --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=denny-s3-bucket,S3Prefix=Export
** TODO [#A] Only imported instances can be exported.
#+BEGIN_EXAMPLE
macs-MacBook-Air:org_data mac$ aws ec2 create-instance-export-task --description "Ubuntu Denny export test" --instance-id i-20fd89ed --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=denny-s3-bucket,S3Prefix=Export

A client error (NotExportable) occurred when calling the CreateInstanceExportTask operation: Only imported instances can be exported.
#+END_EXAMPLE
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html

http://www.virtuallyghetto.com/2013/05/exporting-amazon-ec2-instance-to-run-on.html

Known Limitations for Exporting a VM from Amazon EC2

Exporting instances and volumes is subject to the following limitations:

You cannot export Amazon Elastic Block Store (Amazon EBS) data volumes.

You cannot export an instance that has more than one virtual disk.

You cannot export an instance that has more than one network interface.

You cannot export an instance from Amazon EC2 unless you previously imported it into Amazon EC2 from another virtualization environment.
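Since only previously imported instances can be exported, a round trip starts with an import. A minimal sketch using the newer =aws ec2 import-image= API rather than the old vSphere Connector mentioned above; the bucket, key, and task id are hypothetical:
#+BEGIN_EXAMPLE
# 1. Upload the VM image (an OVA here) to S3 first
aws s3 cp ./my-vm.ova s3://my-import-bucket/vms/my-vm.ova

# 2. Import it as an AMI
aws ec2 import-image --description "my imported VM" \
    --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=vms/my-vm.ova}"

# 3. Poll the import task until it completes
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0
#+END_EXAMPLE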

  • Amazon Directory Service: Managed Directories in the Cloud :noexport:
  • Amazon SWF: Workflow Service for Coordinating Application Components :noexport:
  • Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps.

https://www.youtube.com/watch?v=y7Mff1ceypo (Balan Subramanian: Amazon Simple Workflow)

  • SWF: coordinate work across distributed components

Scalable, Reliable and Auditable

| Name                  | Summary                                        |
|-----------------------+------------------------------------------------|
| Programming Framework | Convenient libraries                           |
| APIs                  | HTTPS, task processing, instructions           |
| Control engine        | Manages state, routes tasks, distributes load  |

SWF concepts
| Name                | Summary                                                                         |
|---------------------+---------------------------------------------------------------------------------|
| Domain              | controls the workflow's scope                                                   |
| Workflow (deciders) | control logic for workflows: execution order, retry policies, timer logic, etc. |
| Activity workers    | discrete steps in your application; processors                                  |

Amazon Simple Workflow Service (Amazon SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of SWF as a fully managed state tracker and task coordinator in the cloud. If your application's steps take more than 500 milliseconds to complete, you need to track the state of processing; if you need to recover or retry when a task fails, Amazon SWF can help you.

Amazon SWF promotes a separation between the control flow of your background job's stepwise logic and the actual units of work that contain your unique business logic. This allows you to separately manage, maintain, and scale the state machinery of your application from the core business logic that differentiates it. As your business requirements change, you can easily change application logic without having to worry about the underlying state machinery, task dispatch, and flow control.

Amazon SWF runs within Amazon's high-availability data centers, so the state tracking and task processing engine is available whenever applications need it. Amazon SWF redundantly stores the tasks, reliably dispatches them to application components, tracks their progress, and keeps their latest state.

Amazon SWF replaces the complexity of custom-coded workflow solutions and process automation software with a fully managed cloud workflow web service. This eliminates the need for developers to manage the infrastructure plumbing of process automation and allows them to focus their energy on the unique functionality of their application.

Amazon SWF seamlessly scales with your application's usage. No manual administration of the workflow service is required when you add more cloud workflows to your application or increase the complexity of your workflows.

Amazon SWF lets you write your application components and coordination logic in any programming language and run them in the cloud or on-premises.
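A minimal CLI sketch of the concepts above (domain, workflow type, execution). All names are hypothetical, and real deciders and activity workers would poll for decision and activity tasks separately:
#+BEGIN_EXAMPLE
# Register a domain (the workflow's scope); keep closed executions 7 days
aws swf register-domain --name my-swf-domain \
    --workflow-execution-retention-period-in-days 7

# Register a workflow type with a default task list and timeouts (in seconds)
aws swf register-workflow-type --domain my-swf-domain \
    --name order-workflow --workflow-version "1.0" \
    --default-task-list name=my-task-list \
    --default-child-policy TERMINATE \
    --default-task-start-to-close-timeout 300 \
    --default-execution-start-to-close-timeout 3600

# Start an execution; deciders and activity workers then pick up the work
aws swf start-workflow-execution --domain my-swf-domain \
    --workflow-id order-0001 \
    --workflow-type name=order-workflow,version=1.0
#+END_EXAMPLE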

  • Amazon AppStream: Low Latency Application Streaming :noexport:
With AppStream, your applications are no longer constrained by the hardware in your customers' hands.

Amazon AppStream includes an SDK that currently supports streaming applications from Microsoft Windows Server 2008 R2 to devices running FireOS, Android, Chrome, iOS, Mac OS X, and Microsoft Windows.
** Amazon AppStream question: streaming-quality variables
You are setting up a media server for a client, and one of the services you decide to use is Amazon AppStream. The client asks what variables are used to measure the streaming quality for the end user. Which of the following is a wrong answer to that question?

  • The synchronization between audio and video (A/V sync)
  • The round-trip latency between user inputs and rendered outputs
  • Number of frames per second (FPS)
  • The video fidelity

Answer: C. Amazon AppStream is a flexible, low-latency service that lets you stream resource-intensive applications and games from the cloud. It deploys and renders your application on AWS infrastructure and streams the output to mass-market devices, such as personal computers, tablets, and mobile phones. Because your application is running in the cloud, it can scale to handle vast computational and storage needs, regardless of the devices your customers are using. There are four main variables to consider when measuring the streaming quality for your end user: the video fidelity, the audio fidelity, the round-trip latency between user inputs and rendered outputs, and the synchronization between audio and video (A/V sync). Each of these quality variables is affected by conditions of the streaming ecosystem.
http://aws.amazon.com/appstream/faqs/
** Amazon AppStream question: "Timed out connecting"
You have recently set up an application server for a client using the Amazon AppStream web service. After some firewall changes overnight, your client complains that when he connects to his streaming application he gets the error "Timed out connecting." What is the most likely reason for this problem?

  • Your firewall must allow traffic through TCP port 80 for Amazon AppStream to work.
  • The app must be the problem, as Amazon AppStream does not use a firewall.
  • Your firewall must allow traffic through TCP port 8080 for Amazon AppStream to work.
  • Your firewall must allow traffic through TCP port 21 for Amazon AppStream to work.

Answer: A. The Amazon AppStream web service deploys your application on Amazon Web Services (AWS) infrastructure and streams input and output between your application and devices such as personal computers, tablets, and mobile phones. Your firewall must allow traffic through TCP port 80 for Amazon AppStream to work.
http://docs.aws.amazon.com/appstream/latest/developerguide/appstream-troubleshoot-streaming.html

  • Amazon Elastic Transcoder: Easy-to-use Scalable Media Transcoding :noexport:
  • AWS Cognito: User Identity and App Data Synchronization :noexport:
https://www.youtube.com/watch?v=abTy-Yyo6lI (Introduction to Amazon Cognito)

Amazon Cognito is designed for mobile developers who want to authenticate users with popular public identity providers or support unauthenticated guest users and use the AWS Cloud to save and sync user data for their mobile apps.

  • Manage user profiles for mobile apps
  • Synchronize app data for your users across their mobile devices
  • Create unique identifiers for your users

  • Amazon Cognito does not receive or store user credentials
  • Only the token received from the identity provider is stored by Amazon Cognito.

| Name                       | Summary                                                                               |
|----------------------------+---------------------------------------------------------------------------------------|
| Identity pools             |                                                                                       |
| Identity provider (IdP)    |                                                                                       |
|----------------------------+---------------------------------------------------------------------------------------|
| Unauthenticated users      |                                                                                       |
| A silent push notification | a push message received by your app on a user's device that won't be seen by the user |
| A sync operation           |                                                                                       |
** Q: Is data saved directly to the Amazon Cognito sync store?
http://aws.amazon.com/cognito/faqs/
No. The optional AWS Mobile SDK saves your data to an SQLite database on the local device; this way, the data is always accessible to your app. The data is pushed to the Amazon Cognito sync store by calling the synchronize() method and, if push synchronization is enabled, all other devices linked to an identity are notified of the data change in the sync store via Amazon SNS.
** Q: How do I use push synchronization?
http://aws.amazon.com/cognito/faqs/

To enable push synchronization you need to declare a platform application using the Amazon SNS page in the AWS Management Console. Then, from the identity pool page in the Amazon Cognito page of the AWS Management Console, you can link the SNS platform application to your Cognito identity pool. Amazon Cognito automatically utilizes the SNS platform application to notify devices of changes.
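A minimal sketch of creating an identity pool from the CLI; the pool name is hypothetical, and =--allow-unauthenticated-identities= enables the guest-user case mentioned above:
#+BEGIN_EXAMPLE
# Create an identity pool that also accepts unauthenticated (guest) identities
aws cognito-identity create-identity-pool \
    --identity-pool-name MyAppPool \
    --allow-unauthenticated-identities

# List existing pools (--max-results is required)
aws cognito-identity list-identity-pools --max-results 10
#+END_EXAMPLE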

  • TODO Fail to install awscli :noexport:
#+BEGIN_EXAMPLE
Requirement already satisfied: pyasn1>=0.1.3 in /Library/Python/2.7/site-packages (from rsa<=3.5.0,>=3.1.2->awscli)
Collecting futures<4.0.0,>=2.2.0; python_version == "2.6" or python_version == "2.7" (from s3transfer<0.2.0,>=0.1.12->awscli)
  Downloading futures-3.2.0-py2-none-any.whl
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.8.9->awscli)
  Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: six, python-dateutil, botocore, rsa, PyYAML, futures, s3transfer, awscli
  Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
  Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/commands/install.py", line 342, in run
    prefix=options.prefix_path,
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_set.py", line 778, in install
    requirement.uninstall(auto_confirm=True)
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 754, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
    renames(path, new_path)
  File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/utils/init.py", line 267, in renames
    shutil.move(old, new)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
    copy2(src, real_dst)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
    copystat(src, dst)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
    os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-fd1lrZ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
#+END_EXAMPLE
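The =OSError= comes from pip trying to uninstall the Apple-shipped =six= package, which macOS System Integrity Protection won't allow. Two common workarounds (not specific to awscli):
#+BEGIN_EXAMPLE
# Leave the system six alone and install a fresh copy alongside it
pip install awscli --ignore-installed six

# ...or install into the per-user site-packages instead of the system path
pip install --user awscli
#+END_EXAMPLE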
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO [#A] AWS VM DDoS other nodes :noexport:
** mail: Re: FW: Your Amazon EC2 Abuse Report [11685211434] :noexport:
[[gnus:osc-colleague#CAHpjzfjWHaw%2BfWddyw1n-Nag5vFb9AouoqUBm83fM_B1P%[email protected]][Email from Garret Grajek (Fri, 13 May 2016 17:44:32 -0700): Re: FW: Your Amazon EC2 Abuse ]]
#+begin_example
From: Garret Grajek [email protected]
Subject: Re: FW: Your Amazon EC2 Abuse Report [11685211434]
To: Julian [email protected]
Cc: Garret Grajek [email protected], Denny Zhang [email protected]
Date: Sat, 14 May 2016 08:44:32 +0800
Reply-To: [email protected]

Beats me

Garret Grajek, CISSP 714.658.0765 (Sent from mobile)

On May 13, 2016 5:35 PM, "Julian" [email protected] wrote:

What is this EC2 instance for? 

On Fri, May 13, 2016 at 8:22 PM, Garret Grajek <[email protected]> wrote:

     

     

    ----

    Garret Grajek, CISSP | OSC

    CEO | +1.949.505.9703| [email protected]

    Schedule Me:https://goo.gl/c2OOvQ

     

    From: Amazon EC2 Abuse [mailto:[email protected]]
    Sent: Friday, May 13, 2016 4:56 PM
    To: [email protected]
    Subject: Your Amazon EC2 Abuse Report [11685211434]

     

    Amazon Web Services

         Hello,

         You have outstanding reports against your EC2 instance(s) and we are notifying
         you that we have investigated and observed abusive activity. We last contacted
         you about this on May 8th and have not received a reply. Details of the
         implicated instance(s) are below:

         Reported activity: DoS
         Instance ID: i-a6c39f28

         Please review these reports and respond with details of the action(s) you have
         taken to stop the abusive activity. If you do not consider the activity detailed
         in these reports to be abusive, please let us know why. The original reports are
         included at the end of this email for your convenience.

         Please note that your reply is required within 48 hours. According to the terms
         of the AWS Customer Agreement (http://aws.amazon.com/agreement/), if your
         instances continue to violate AWS's Acceptable Use Policy (
         http://aws.amazon.com/aup/), we may take action against your resources or account
         to stop the abusive activity, including suspension or termination of your AWS
         account.

         Please remember that you are responsible for ensuring that your instances and all
         applications are properly secured.

         Regards,
         AWS Abuse

         ---Original Report---

         Email Subject
         ========================================

         Your Amazon EC2 Abuse Report [11685211434-1]

         Log Extract
         ========================================

         ---------------------------
         AWS Account: 309775215851
         Report begin time: 08-May-2016 18:50:58 UTC
         Report end time: 08-May-2016 18:51:58 UTC

         Protocol: TCP
         Remote IP: 192.230.88.129
         Remote port(s): 80

         Total bytes sent: 227948920
         Total packets sent: 225692
         Total bytes received: 0
         Total packets received: 0

         ---------------------------------------------------------------------------------

         How can I contact a member of the Amazon EC2 abuse team or abuse reporter?
         Reply this email with the original subject line.

         Amazon Web Services

         Amazon Web Services LLC is a subsidiary of Amazon.com, Inc. Amazon.com is a
         registered trademark of Amazon.com, Inc. This message produced and distributed by
         Amazon Web Services, LLC, 410 Terry Avenue North, Seattle, WA 98109-5210.


#+end_example

  • TODO [#A] AWS service performance problem :noexport:
** TODO AWS CodeCommit: git clone and git pull is too slow
#+BEGIN_EXAMPLE
Keef Baker [7 hours ago] Until they implement pull requests code commit is off my radar

Denny Zhang [7 hours ago] Yes, you're right. No PRs

Denny Zhang [16 minutes ago] @Keef Baker

Also just noticed, two things about AWS CodeCommit

  1. It doesn't support the concept of git deploy key
  2. Run git clone in one Linode VM, it's super slow. Previously the git repo is in Bitbucket. It's much faster.

Copy @Kim Kao (edited)

Keef Baker [1 minute ago] Yeah I think it's got a long way to go

Denny Zhang [1 minute ago] The Git pull is too slow #+END_EXAMPLE

  • TODO [#A] AWS Service feature limitation :noexport:
  • TODO [#A] AWS service cost problem :noexport:
  • TODO AWS Platform problems :noexport:
** Lose some flexibility: makes your previously simple things complicated

blog: http/https

  • Try AWS machine learning :noexport:
  • TODO [#A] read AWS re-invent: https://aws.amazon.com/new/reinvent/ :noexport:
  • TODO Learn AWS ES best practice: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-dedicatedmasternodes.html :noexport:
  • TODO local notes :noexport:
** TODO AWS: https://aws.amazon.com/quickstart/architecture/vpc/ :noexport:
** TODO AWS tool: http://www.conceptdraw.com/solution-park/computer-networks-aws :noexport:
** TODO AWS::CloudFormation::Init configSets use wget :noexport:
** TODO In the AWS tech stack, why do I need the boto tool? :noexport:
** TODO AWS Architecture tool: https://cloudcraft.co/app https://aws.amazon.com/architecture/icons/
** TODO google images: AWS Architecture Diagrams: https://www.google.com/imgres?imgurl=http%3A%2F%2Fwww.conceptdraw.com%2Fsolution-park%2Fresource%2Fimages%2Fsolutions%2F_aws_simple_icons%2FComputer-and-Networks-AWS-Architecture-Diagram-3-Tier-Auto-scalable-Web-Application-Architecture.png&imgrefurl=http%3A%2F%2Fwww.conceptdraw.com%2Fsolution-park%2Fcomputer-networks-aws&docid=lQ38YmrhIWwj4M&tbnid=0HCglw8t9PSKPM%3A&vet=10ahUKEwjizJi6_7zXAhUp6oMKHV2fBkoQMwhBKAMwAw..i&w=1040&h=992&bih=466&biw=822&q=aws%20solution%20diagram&ved=0ahUKEwjizJi6_7zXAhUp6oMKHV2fBkoQMwhBKAMwAw&iact=mrc&uact=8
** TODO cleanup AWS DNS
** TODO AWS CodePipeline VS Elastic Beanstalk?
** TODO Enable AWS billing alerts
** TODO AWS CodeStar VS Elastic Beanstalk VS CodeDeploy VS CodePipeline
** TODO Service Catalog VS AWS Config
** TODO AWS Config
AWS Config provides an inventory of your AWS resources and a history of configuration changes to these resources. You can use AWS Config to define rules that evaluate these configurations for compliance.
** TODO How does AWS IoT work?
** TODO review AWS bills and identify the waste
** TODO Backup critical data to AWS S3, and keep the latest 3 versions
** TODO Try LDAP for AWS and Azure
** TODO AWS lambda serverless
** TODO AWS step functions
AWS Step Functions makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion.
** TODO [#A] IAM role: tasks can use it to make API requests to authorized AWS services.
** TODO AWS IAM: create a full-access user, so we don't need to log in as root
** AWS SNS
** AWS cloudwatch
*** AWS cloudwatch pricing
https://aws.amazon.com/cloudwatch/pricing/
** TODO read AWS top10 security pdf
** TODO Try AWS DevOps services: https://devops.com/aws-extends-devops-ambitions/
** TODO netflix security_monkey: monitors your AWS and GCP accounts for policy changes and alerts on insecure configurations.
https://github.com/Netflix/security_monkey
** TODO AWS MFA: secure my ssh login
*** What's the cost?
** TODO AWS Step functions
#+BEGIN_EXAMPLE
Puneeth [10 minutes ago] Half of our infra is serverless. Makes sense when your needs are more event driven and short. We use them heavily in step functions

Puneeth [9 minutes ago] I think yesterday, they released a new feature in step functions which links directly to logs and lambda functions from an execution

Denny Zhang [6 minutes ago] Nice. I noticed step functions. Haven't got the time to check it though.

@Puneeth what's your understanding of AWS step functions?

Puneeth [1 minute ago] We use it extensively in production now. It made our serverless architecture lot simpler. No more tons of sqs to orchestrate lambda. One missing feature in step functions is the missing versioning . So I basically hacked terraform to create a new state machine when the definition changes.

Puneeth [< 1 minute ago] Our use case. When we queue a flight ticket for ticketing. There is an actual person somewhere who dequeues and issues a flight ticket. It might take an hour or two or sometimes 6 hours. That's where step functions step in.

Denny Zhang [< 1 minute ago] Thanks! Will pay attention to your highlights, when I try AWS step functions.
#+END_EXAMPLE
** TODO AWS session manager
** TODO How does AWS OpsWorks look in the market
https://aws.amazon.com/opsworks/
** TODO AWS: when UserData has failed, why doesn't the deployment fail?
** TODO Using Let's Encrypt with AWS ELB
https://certbot.eff.org/docs/challenges.html?highlight=http%20challenge
https://community.letsencrypt.org/t/using-lets-encrypt-with-aws-elb/34632

https://blog.cloudinvaders.com/installing-a-lets-encrypt-certificate-on-an-elastic-load-balancer/
https://gist.github.com/tobiasmcnulty/f1465b124e34a9dd6872a2f23e314a83

Digital certificates can only be issued to people who are entitled to them.
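One hedged way to wire this up: issue the certificate with certbot's DNS-01 challenge (so the ELB never has to answer HTTP validation), upload it to IAM, and point the classic ELB's HTTPS listener at it. All names and the account id below are hypothetical:
#+BEGIN_EXAMPLE
# Issue the certificate via the DNS-01 challenge (no web server required)
certbot certonly --manual --preferred-challenges dns -d www.example.com

# Upload the resulting PEM files to IAM so ELB can reference them
aws iam upload-server-certificate --server-certificate-name my-le-cert \
    --certificate-body file://cert.pem \
    --private-key file://privkey.pem \
    --certificate-chain file://chain.pem

# Attach the certificate to an existing HTTPS (port 443) listener
aws elb set-load-balancer-listener-ssl-certificate \
    --load-balancer-name my-elb --load-balancer-port 443 \
    --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/my-le-cert
#+END_EXAMPLE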

https://community.letsencrypt.org/t/using-lets-encrypt-with-aws-elb/34632/9
** TODO AWS cloudformation: avoid rollback
** TODO How to test cloudformation locally without incurring AWS bills?
** TODO couchbase enable email sending
** TODO couchbase rebalancing has failed
#+BEGIN_EXAMPLE

Event | Module | Code | Server Node | Time
Rebalance exited with reason {{badmatch,false},
  [{ns_vbucket_mover,handle_info,2,
    [{file,"src/ns_vbucket_mover.erl"}, {line,171}]},
   {gen_server,handle_msg,5, [{file,"gen_server.erl"},{line,604}]},
   {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]}
  | ns_orchestrator 002 | [email protected] | 9:36:28 PM Thu Jan 11, 2018
Haven't heard from a higher priority node or a master, so I'm taking over. | mb_master 000 | [email protected] | 9:30:41 PM Thu Jan 11, 2018
Bucket "mdm-master" loaded on node '[email protected]' in 0 seconds. | ns_memcached 000 | [email protected] | 9:13:19 PM Thu Jan 11, 2018
Shutting down bucket "mdm-master" on '[email protected]' for server shutdown | ns_memcached 000 | [email protected] | 9:13:15 PM Thu Jan 11, 2018
Bucket "mdm-master" rebalance does not seem to be swap rebalance | ns_vbucket_mover 000 | [email protected] | 9:09:00 PM Thu Jan 11, 2018
Started rebalancing bucket mdm-master | ns_rebalancer 000 | [email protected] | 9:08:58 PM Thu Jan 11, 2018
Bucket "mdm-staging" rebalance appears to be swap rebalance | ns_vbucket_mover 000 | [email protected] | 9:08:58 PM Thu Jan 11, 2018
Started rebalancing bucket mdm-staging | ns_rebalancer 000 | [email protected] | 9:08:58 PM Thu Jan 11, 2018
Bucket "mdm-session" rebalance appears to be swap rebalance | ns_vbucket_mover 000 | [email protected] | 9:08:58 PM Thu Jan 11, 2018
Started rebalancing bucket mdm-session | ns_rebalancer 000 | [email protected] | 9:08:57 PM Thu Jan 11, 2018
Bucket "mdm-config" rebalance appears to be swap rebalance | ns_vbucket_mover 000 | [email protected] | 9:08:57 PM Thu Jan 11, 2018
Started rebalancing bucket mdm-config | ns_rebalancer 000 | [email protected] | 9:08:55 PM Thu Jan 11, 2018
Starting rebalance, KeepNodes = ['[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]',
  '[email protected]','[email protected]', '[email protected]','[email protected]', ...]
  | ns_orchestrator 004 | [email protected] | 9:08:55 PM Thu Jan 11, 2018
#+END_EXAMPLE
** TODO couchbase cluster reboot procedure
** HALF couchbase-server autostart issue: why doesn't it show the status correctly?
** TODO Fail to stop one couchbase instance
** TODO How to restart couchbase cluster?
** TODO [#A] couchbase rebalancing improvement issue
** TODO We have to add more Couchbase nodes just because of disk; it's not cost-effective
** TODO Question: we add new Couchbase nodes just because of low disk
** TODO why no couchbase email alerts when cb-13 was down
** TODO couchbase email notification doesn't work
** TODO [#A] couchbase backup takes more than 1 week
** TODO couchbase exporter
https://pypi.org/project/prometheus-couchbase-exporter/

https://github.com/Zumata/couchbase_exporter
** TODO [#A] Deep dive into couchbase and elasticsearch system design
https://developer.couchbase.com/documentation/server/current/introduction/intro.html
https://developer.couchbase.com/documentation/server/4.5/connectors/elasticsearch-2.1/doc-design-elastic.html
https://developer.couchbase.com/documentation/server/current/architecture/high-availability-replication-architecture.html
https://developer.couchbase.com/documentation/server/current/architecture/architecture-intro.html
https://developer.couchbase.com/documentation/server/current/architecture/core-data-access-buckets.html

  • TODO [#A] cheatsheet https://www.expeditedssl.com/aws-in-plain-english :noexport:
  • TODO RDS download snapshot to local https://stackoverflow.com/questions/14916899/download-rds-snapshot
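There is no direct download API for RDS snapshots; one route (a capability AWS added later) is exporting the snapshot to S3 as Parquet and syncing it down. A sketch with hypothetical names and ARNs; the IAM role needs write access to the bucket, and the KMS key encrypts the exported files:
#+BEGIN_EXAMPLE
# Export a snapshot to S3 (arrives as Parquet files), then sync it down
aws rds start-export-task \
    --export-task-identifier my-export-001 \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000

aws s3 sync s3://my-export-bucket/my-export-001 ./rds-snapshot-data
#+END_EXAMPLE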
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • DONE How do I delete a Route 53 hosted zone created by service discovery? :noexport: CLOSED: [2020-01-12 Sun 15:00] https://aws.amazon.com/premiumsupport/knowledge-center/route-53-service-discovery-delete-zone/
  • DONE allow users to check bill: IAM policy and enable federated user :noexport: CLOSED: [2020-01-22 Wed 14:49]
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • TODO AWS Fargate :noexport:
  • TODO What does it mean that one AWS security group can allow traffic from another security group? :noexport:
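In short: a rule's source can be another security group instead of a CIDR range, so the rule matches any instance that belongs to that source group. A minimal sketch with hypothetical group ids:
#+BEGIN_EXAMPLE
# Allow Postgres traffic into sg-11111111 from any instance that belongs
# to sg-22222222 (group ids are hypothetical); no CIDR range is needed.
aws ec2 authorize-security-group-ingress \
    --group-id sg-11111111 \
    --protocol tcp --port 5432 \
    --source-group sg-22222222
#+END_EXAMPLE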
  • TODO [#A] AWS DDoS :noexport:
  • TODO Setup https://aws.dennyzhang.com :noexport:
  • DONE git push ~/.../..._blog :noexport: CLOSED: [2020-06-21 Sun 15:57] AWS -> My Account -> My security credentials -> AWS CodeCommit credentials