<!--{{{-->
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
<!--}}}-->
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
/***
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
***/
/*{{{*/
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
/*}}}*/
/*{{{*/
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox 1.5.0.2 where print preview displays the noscript content */
}
/*}}}*/
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
<!--}}}-->
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

----
Also see [[AdvancedOptions]]
<<importTiddlers>>
Amazon EC2 A1 instances deliver significant cost savings and are ideally suited for scale-out and Arm-based workloads that are supported by the extensive Arm ecosystem. A1 instances are the first EC2 instances powered by AWS Graviton Processors that feature 64-bit Arm Neoverse cores and custom silicon designed by AWS.
''Features:''

*Custom built AWS Graviton Processor with 64-bit Arm Neoverse cores
*Support for Enhanced Networking with Up to 10 Gbps of Network bandwidth
*EBS-optimized by default
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
|Model |vCPU |Mem (GiB) |Storage |Network Performance (Gbps)|
|a1.medium |1 |2 |EBS-Only |Up to 10|
|a1.large |2 |4 |EBS-Only |Up to 10|
|a1.xlarge |4 |8 |EBS-Only |Up to 10|
|a1.2xlarge |8 |16 |EBS-Only |Up to 10|
|a1.4xlarge |16 |32 |EBS-Only |Up to 10|
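Each step in the table above doubles both vCPU count and memory. A minimal sketch of picking a size from it (the instance data is transcribed from the table, not queried from AWS):

```python
# A1 instance sizes transcribed from the table above: (vCPU, memory in GiB).
A1_SIZES = {
    "a1.medium":  (1, 2),
    "a1.large":   (2, 4),
    "a1.xlarge":  (4, 8),
    "a1.2xlarge": (8, 16),
    "a1.4xlarge": (16, 32),
}

def smallest_a1(vcpu_needed, mem_needed_gib):
    """Return the smallest A1 size meeting the given vCPU and memory needs."""
    for name, (vcpu, mem) in A1_SIZES.items():
        if vcpu >= vcpu_needed and mem >= mem_needed_gib:
            return name
    return None  # no listed size is large enough

print(smallest_a1(4, 8))  # → a1.xlarge
```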
All instances have the following specs:

*Custom built AWS Graviton Processor with 64-bit Arm cores
*EBS Optimized
*Enhanced Networking†

''Use Cases:''

Scale-out workloads such as web servers, containerized microservices, caching fleets, and distributed data stores, as well as development environments.

Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for A1 instances, T2 instances, and m3.medium. Each vCPU on A1 instances is a core of an AWS Graviton Processor.

† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs.

* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, see the Optimize CPUs documentation.

[[mod1|mod1]]   [[mod2|mod2]]   [[mod3|mod3]]    [[mod4|mod4]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3094.jpg]]
!AWS Secure Initial Account Setup
!!How do I ensure I set up my AWS account securely?
AWS provides many account-level security options and tools that enable customers to meet their security objectives and implement the appropriate controls for their business functions. This document provides baseline security guidance for AWS accounts to help customers gain confidence that they have securely set up and initialized an account according to AWS best practices. For additional security guidance on managing multiple AWS accounts, see the AWS Organizations User Guide.

The following sections assume basic knowledge of AWS accounts, AWS Identity and Access Management (IAM), AWS CloudTrail, Amazon CloudWatch, AWS Config, and Amazon Simple Storage Service (Amazon S3).

!General Best Practices
When setting up access to a service provider, there are some universal security measures that are necessary in order to create a secure system:

*Create a strategy to control permissions at the user level, and grant the minimum set of permissions necessary (least privilege) to complete a job role or task.
*Monitor and audit your users, and regularly review privileges. Leverage AWS native security-logging capabilities and configure additional logging as necessary.
*Identify which individuals should interact with the service provider regarding billing, security, and operations matters, and grant authorization accordingly.
*Ensure continuous communication with the service provider, even if individuals change roles or leave the company. For example, use email distribution lists and company phone numbers rather than personal email addresses or mobile phone numbers.
*For consistency across multiple accounts, use AWS CloudFormation or a configuration tool to automatically set up logging and monitoring features for new accounts upon creation.
!Security on AWS
An AWS account security baseline should include how to communicate with AWS, how to manage and control user access within the account, and how to monitor and audit user activities. The following sections describe key methods and services to help manage each of these aspects of account security.

!!Communication with AWS
When a customer creates a new AWS account, AWS captures the primary contact information that it will use for all communication about the account, unless alternate contacts are also added. AWS accounts can include alternate contacts for Billing, Operations, and Security. These contacts will receive copies of relevant notifications and serve as secondary communication points if the primary contact is unavailable. When setting up communication channels with AWS, keep the following best practices in mind:

*Configure the AWS account contact information with a corporate email distribution list (e.g. aws-<org_name>@yourdomain.com) and company phone number rather than an individual user’s email address or personal cell phone.
*Configure the account’s alternate contacts to point to a group rather than an individual. For example, create separate email distribution lists for billing, operations, and security and configure these as Billing, Security, and Operations contacts in each active AWS account. This ensures that multiple people will receive AWS notifications and be able to respond, even if someone is on vacation, changes roles, or leaves the company.
*Sign up for an AWS support plan that aligns with your organization’s support expectations. Business and Enterprise support plans provide additional contact mechanisms (web, chat, and phone) that are especially useful when a customer needs an immediate response from AWS.
!AWS Identity and Access Management (IAM)
AWS recommends using IAM to securely control access to AWS resources. IAM is a free service that allows customers to grant granular permissions, and incorporates capabilities for multi-factor authentication (MFA), identity federation, and record logging. Create a foundational IAM strategy early on and keep the following best practices in mind:

*Create a strong root account password, enable physical MFA on the root account, and create a process for storing and retrieving these credentials only when absolutely necessary. For day-to-day interaction with AWS, use IAM user credentials instead.
*If you have root account access keys, remove them and use IAM roles or user access keys instead.
*Ensure you have a documented process for adding and removing authorized users. Ultimately, it should fully integrate with an organization’s existing employee provisioning/de-provisioning process.
*Create IAM groups that reflect organizational roles, and use managed policies to grant specific technical permissions as required.
*If you have an existing identity federation provider, you can use the AWS Security Token Service to grant external identities secure access to your AWS resources without having to create IAM users.
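As a concrete illustration of least privilege, the sketch below builds a managed-policy document granting read-only access to a single S3 bucket. The bucket name and Sid are hypothetical placeholders, not from the source; the JSON shape follows the standard IAM policy grammar:

```python
import json

# Hypothetical least-privilege policy: read-only access to one S3 bucket.
# "example-team-bucket" and the Sid are placeholders for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyExampleBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-team-bucket",
                "arn:aws:s3:::example-team-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching a policy like this to an IAM group, rather than to individual users, keeps permissions aligned with organizational roles as recommended above.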
!Logging and Auditing
AWS provides several different tools to help customers monitor their account activities and trends. AWS recommends all customers enable the following features:

*Create a security email distribution list to receive security-related notifications. This will make it easier to configure and manage monitoring notifications associated with the monitoring services described below.
*Create an Amazon Simple Notification Service (Amazon SNS) topic for security notifications and subscribe the security email distribution list to the topic. This will make it easier to create and manage security-related alerts.
*Enable CloudTrail in all AWS Regions, which by default will capture global service events. Enable CloudTrail log file integrity validation and send logs to a central S3 bucket that your security team owns.
*Configure CloudTrail integration with Amazon CloudWatch Logs and launch the provided AWS CloudFormation template to create CloudWatch alarms for security and network-related API activity.
*Enable AWS Config. Use the predefined rules CLOUD_TRAIL_ENABLED and RESTRICTED_INCOMING_TRAFFIC to notify the security SNS topic if CloudTrail is disabled for the account or if someone creates insecure security group rules.
*Create an S3 bucket for storing monitoring data and configure the bucket policy to allow the appropriate services (CloudTrail, AWS Config) to store AWS log and configuration data. For multiple accounts, use a single bucket to consolidate this data and restrict access appropriately.
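The central logging bucket needs a policy that allows CloudTrail to write into it. The sketch below shows the usual shape of that bucket policy; the bucket name and the account ID (123456789012) are placeholders for illustration:

```python
import json

# Sketch of a CloudTrail logging-bucket policy. Bucket name and
# account ID (123456789012) are placeholders for illustration.
bucket = "example-cloudtrail-logs"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/123456789012/*",
            # CloudTrail must write objects with this ACL, so the
            # bucket owner retains full control of delivered logs.
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```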
!Billing and Cost Monitoring
AWS forecasting and budgeting services help you accurately plan and monitor your usage and spending levels. Here are steps to establish a baseline for your account:

*Configure AWS usage and billing reports to get detailed information regarding trends in your account activity.
*Designate an email distribution list that will receive billing notifications.
*Create an SNS topic for budget notifications and subscribe the billing email distribution list to this topic.
*Create one or more budgets in your account and configure notifications if forecasted spending exceeds your budgeted usage.
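The forecast-versus-budget check behind such a notification can be sketched as follows; the threshold percentage and dollar amounts are illustrative, not from the source:

```python
def should_notify(forecasted_spend, budgeted_amount, threshold_pct=100.0):
    """Return True when forecasted spend reaches the threshold percentage
    of the budgeted amount (AWS Budgets also supports alerting below 100%)."""
    return forecasted_spend >= budgeted_amount * threshold_pct / 100.0

print(should_notify(1100.0, 1000.0))        # forecast over budget → True
print(should_notify(850.0, 1000.0, 80.0))   # over the 80% alert line → True
print(should_notify(700.0, 1000.0, 80.0))   # under the alert line → False
```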
Ref: https://aws.amazon.com/answers/security/aws-secure-account-setup/
C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-to-compute ratio.

''Features:''

*C5 instances offer a choice of processors based on the size of the instance.
*New c5.12xl, c5.24xl, and c5.metal instances feature custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz.
*Other C5 instance sizes will launch on the 2nd generation Intel Xeon Scalable Processors (Cascade Lake) or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz.
*New larger instance size, c5.24xlarge, offers 96 vCPUs and 192 GiB of memory
*Requires HVM AMIs that include drivers for ENA and NVMe
*With C5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5 instance
*Elastic Network Adapter (ENA) provides C5 instances with up to 25 Gbps of network bandwidth and up to 14 Gbps of dedicated bandwidth to Amazon EBS.
|Model |vCPU |Memory (GiB) |Instance Storage (GiB) |Network Bandwidth (Gbps) |EBS Bandwidth (Mbps)|
|c5.large |2 |4 |EBS-Only |Up to 10 |Up to 3,500|
|c5.xlarge |4 |8 |EBS-Only |Up to 10 |Up to 3,500|
|c5.2xlarge |8 |16 |EBS-Only |Up to 10 |Up to 3,500|
|c5.4xlarge |16 |32 |EBS-Only |Up to 10 |3,500|
|c5.9xlarge |36 |72 |EBS-Only |10 |7,000|
|c5.12xlarge |48 |96 |EBS-Only |12 |7,000|
|c5.18xlarge |72 |144 |EBS-Only |25 |14,000|
|c5.24xlarge |96 |192 |EBS-Only |25 |14,000|
|c5.metal |96 |192 |EBS-Only |25 |14,000|
|c5d.large |2 |4 |1 x 50 NVMe SSD |Up to 10 |Up to 3,500|
|c5d.xlarge |4 |8 |1 x 100 NVMe SSD |Up to 10 |Up to 3,500|
|c5d.2xlarge |8 |16 |1 x 200 NVMe SSD |Up to 10 |Up to 3,500|
|c5d.4xlarge |16 |32 |1 x 400 NVMe SSD |Up to 10 |3,500|
|c5d.9xlarge |36 |72 |1 x 900 NVMe SSD |10 |7,000|
|c5d.18xlarge |72 |144 |2 x 900 NVMe SSD |25 |14,000|
c5.12xl, c5.24xl, and c5.metal instances have the following specs:

*Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz
*Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
*EBS Optimized
*Enhanced Networking†
All other C5 instances have the following specs:

*Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz, or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processors with a sustained all core Turbo frequency of up to 3.4GHz and single core turbo frequency of up to 3.5GHz
*Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases:''

High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
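Picking a C5 size from the table above, optionally requiring local NVMe storage (C5d variants), can be sketched as follows; the data is transcribed from the table, not queried from AWS:

```python
# (name, vCPU, memory GiB, has local NVMe) transcribed from the C5 table above.
C5_SIZES = [
    ("c5.large",    2,  4,  False),
    ("c5.xlarge",   4,  8,  False),
    ("c5.2xlarge",  8,  16, False),
    ("c5.4xlarge",  16, 32, False),
    ("c5d.large",   2,  4,  True),
    ("c5d.xlarge",  4,  8,  True),
    ("c5d.2xlarge", 8,  16, True),
    ("c5d.4xlarge", 16, 32, True),
]

def pick_c5(vcpu, mem_gib, need_nvme=False):
    """Smallest listed C5/C5d size meeting the vCPU/memory requirement."""
    for name, v, m, nvme in C5_SIZES:
        if v >= vcpu and m >= mem_gib and (nvme or not need_nvme):
            return name
    return None

print(pick_c5(8, 16))                  # → c5.2xlarge
print(pick_c5(8, 16, need_nvme=True))  # → c5d.2xlarge
```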
C5n instances are ideal for high compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps of network bandwidth and increased memory over comparable C5 instances.
Features:

3.0 GHz Intel Xeon Platinum processors with Intel Advanced Vector Extension 512 (AVX-512) instruction set
Run each core at up to 3.5 GHz using Intel Turbo Boost Technology
Larger instance size, c5n.18xlarge, offering 72 vCPUs and 192 GiB of memory
Requires HVM AMIs that include drivers for ENA and NVMe
Network bandwidth of up to 100 Gbps, delivering increased performance for network-intensive applications.
33% higher memory footprint compared to C5 instances
|Model	|vCPU*	|Mem (GiB)	|Storage	|Dedicated EBS Bandwidth (Mbps)	|Network Performance (Gbps)|
|c5n.large	|2	|5.25	|EBS-Only	|Up to 3,500	|Up to 25|
|c5n.xlarge	|4	|10.5	|EBS-Only	|Up to 3,500	|Up to 25|
|c5n.2xlarge	|8	|21	|EBS-Only	|Up to 3,500	|Up to 25|
|c5n.4xlarge	|16	|42	|EBS-Only	|3,500	|Up to 25|
|c5n.9xlarge	|36	|96	|EBS-Only	|7,000	|50|
|c5n.18xlarge	|72	|192	|EBS-Only	|14,000	|100|
All instances have the following specs:

3.0 GHz Intel Xeon Platinum Processor
Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
EBS Optimized
Enhanced Networking†
Use Cases

High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
!Create Ubuntu 16.04 Linux instance
Step 01: ''Log in to the console''
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3110-1.jpg]]

Step 02: ''Select EC2 services''
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3111.jpg]]

Step 03: ''Launch Instance''
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3112-1.jpg]]

Step 04: ''Choose Ubuntu Server 16.04 LTS ....''
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3114-1.jpg]]

Step 05: ''Choose an Instance Type''
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3116.jpg]]

!Module1
!''AWS vs Azure''

Azure is obviously very Microsoft-vendor oriented, but it is excellent for deploying lab, private enterprise, or production Microsoft-focused instances.

AWS has a much broader scope and, in my opinion, much better support for container-based instances as well as Linux-based virtual setups.

So if I were setting up a Windows Server 2012 R2 VM, Azure is the better choice; for a Linux VM, AWS.
----------------------------------------------------------
Cloud architects with experience in AWS, Azure, and Google cloud deployment models and architecture will continue to enjoy higher pay, bonuses, and more job opportunities.
----------------------------------
When we compare the costs, there are areas in which AWS is better than Azure. However, when working out whether Azure is better than AWS, pricing is not the only consideration. ... On a global level, AWS also has many more servers, but again, this may not be an important consideration for all organizations.
---------------------------------------
Virtual machines or virtual workloads: AWS is the only one that supports VMware.

If you were being more general, then I'd say that both platforms are pretty similar in their offerings today. Azure has certainly caught up in the past 12 months in its offerings. I've had great experience with the support offerings from both vendors. AWS Enterprise Support (although expensive) is brilliant and more of a business partner than just a break/fix service.
---------------------------------------
[img[https://learn.itmasters.edu.au/pluginfile.php/167098/mod_forum/attachment/69195/aws%20vs%20azure%20vs%20google.PNG]]
Source: https://www.datamation.com/cloud-computing/aws-vs-azure-vs-google-cloud-comparison.html
-----------------------------------------
!AWS solution architect associate certification
*to sit for the official AWS certification https://aws.amazon.com/certification/certified-solutions-architect-associate/.
*prep for the "lower" lvl cert, AWS Certified Cloud Practitioner: https://aws.amazon.com/certification/certified-cloud-practitioner/

-------------
!AWS commands
AWS CLI Commands Reference

https://docs.aws.amazon.com/cli/latest/index.html

For EC2

https://docs.aws.amazon.com/cli/latest/reference/ec2/

!Price comparison between AWS and Azure?
You can use the following calculators to compare as needed:

https://calculator.s3.amazonaws.com/index.html

https://azure.microsoft.com/en-in/pricing/calculator/
-----------------------------------
https://dzone.com/articles/azure-cosmos-db-costs-vs-dynamo-db-and-neptune
---------------------------
!Cyberattack
''Amazon Web Services Customers Can Hack AWS Cloud And Steal Data, Says Oracle CTO Larry Ellison''
The article is about Oracle's new Generation 2 Cloud versus traditional cloud architecture, such as what he said Amazon (AWS) currently uses.
https://www.forbes.com/sites/bobevans1/2018/10/26/amazon-web-services-customers-can-hack-aws-cloud-and-steal-data-says-oracle-cto-larry-ellison/#7da8999476cf

I think it is analogous to "Insider Threat", i.e. employee(s) attacking a Company from within.
--------------------------------
!Security Considerations
The ACSC has rated AWS to PROTECTED in the Australian region.  https://aws.amazon.com/compliance/irap/ has some information to consider around their certification.

The AWS Well-Architected framework (https://aws.amazon.com/architecture/well-architected/) and tool (https://aws.amazon.com/well-architected-tool/) are a good place to start when looking at how you design around AWS and keep it secure.

AWS Summits are a good place to learn about how to architect secure AWS designs (https://aws.amazon.com/events/summits/?awsf.events-summit=summit-crosstag%23summit-apac )

---------------------------------------
MS Azure obtained its Australian Federal PROTECTED network status before AWS. Azure also has three Australian data centres (two serving the public, one serving government) to meet Australian hosting requirements; these data centres provide failover/HA while also complying with Australian data-hosting requirements.

https://cloudblogs.microsoft.com/industry-blog/en-au/government/2018/04/18/azure-achieves-certification-for-protected-data-in-australia/
---------------------------------
Yes, most security breaches associated with AWS have been around exposed S3 buckets, and I have always been of the opinion that Amazon needed to do more about it, which they seem to have done now, according to the article below.

https://www.zdnet.com/article/aws-rolls-out-new-security-feature-to-prevent-accidental-s3-data-leaks/
-------------------------------------

Security in the cloud is not a simple matter. Stick to the NIST definitions for IaaS, PaaS, and SaaS, and design your security around those definitions. This will show where you have accountability/responsibility for ensuring security and where the vendor/AWS has accountability for delivering the controls. TAG and CIS have good insights for developing your cloud security policy. Before moving to the cloud, I would suggest spending some time creating your cloud security patterns, such as storage, workstation, etc.

|Virtualisation and the Cloud|Advances in virtual computing have driven growth in virtual data centres, software defined networks and cloud services|
|Cloud Compliance|The regulatory and compliance challenges associated with public clouds will ease across all industries in the coming years|
|Cloud Security Solutions|Effective techniques do exist for securing the cloud services, including cloud access security brokers and micro-segmentation|
-------------------------------------
!EC2 comparison site
For those that haven't seen it, this site is invaluable for comparing various EC2 (and RDS) instance types. Far better than trying to use AWS's website:

https://www.ec2instances.info

You can filter by a range of parameters, plus show costs for different regions or even Reserved Instance types.
--------------------
!Getting Started with AWS pdf
https://awsdocs.s3.amazonaws.com/gettingstarted/latest/awsgsg-intro.pdf
------------------------------------
!LABS
-----------------------
!What information can I get if I use the AWS API?
You can get almost all the required information from outside your operating system; the best place to read about the API is the documentation at

https://docs.aws.amazon.com/index.html
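To get a feel for what comes back, here is a Python sketch that pulls a few fields out of a DescribeInstances-style response. The sample JSON is a hand-written, heavily trimmed stand-in for the real EC2 API output (which contains many more fields), and the instance values are made up:

```python
import json

# A simplified, hand-written extract of what the EC2 DescribeInstances API
# returns; the real response contains many more fields per instance.
sample_response = """
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-0abcd1234example",
          "InstanceType": "t2.micro",
          "State": {"Name": "running"},
          "Placement": {"AvailabilityZone": "ap-southeast-2a"}
        }
      ]
    }
  ]
}
"""

def summarise_instances(response_json: str) -> list[dict]:
    """Pull a few useful fields out of a DescribeInstances-style response."""
    data = json.loads(response_json)
    summary = []
    for reservation in data["Reservations"]:
        for inst in reservation["Instances"]:
            summary.append({
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
                "az": inst["Placement"]["AvailabilityZone"],
            })
    return summary

print(summarise_instances(sample_response))
```

In practice you would get this JSON back from the API (or via the CLI/SDK) rather than hard-coding it; the point is just how much machine-readable detail is available without ever logging into the OS.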
------------------
!How has Cloud impacted System Administrator roles overall in Australia?
--------------------------------------
!functionality or price decision for cloud vendor
---------------------------
!Using Putty
here: https://docs.aws.amazon.com/es_es/AWSEC2/latest/UserGuide/putty.html
------------------------------
 https://www.viktorious.nl/2013/01/14/putty-log-all-session-output
--------------------------------------
!Netflix and Chaos Monkey

My understanding, and from the github repository notes (https://netflix.github.io/chaosmonkey/) is that the point behind the chaos monkey (and the Simian Army) is to be able to test fault tolerance in production. 

I’m sure the infrastructure and application design goes through regular testing with the chaos monkey, but isn’t the whole point to test resilience in production?
---------------------------------------
Yes, you're 100% correct. Netflix uses these tools to test production resilience.

https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact.
------------------------------------------
!AWS Machine learning and AI
AWS has the broadest and deepest set of machine learning and AI services for your business. Its capabilities are built on the most comprehensive cloud platform, optimised for machine learning with high-performance compute, and with no compromises on security and analytics.
---------------------
Get yourself some access to Colab:

https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb
---------------
AWS has spent a fair bit of effort on creating a ML platform that is easy to use for data scientists though, have a look at AWS SageMaker - https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html
--------------------
!T type instances
------------------------
!Vmware aws integration
What steps or considerations can we take to integrate a private VMware cloud and AWS to achieve a hybrid cloud strategy?
-----------------------------
''1. Security:''

Security in a hybrid cloud has to happen at the site where the transfer begins, as you'll need to encrypt data before sending it. You'll also need a secure VPN.

''2. Hypervisor usage: ''

If the hypervisor(s) used in the private cloud differ from what's being used in the public cloud, then you need efficient conversion package(s) that can be used when data and applications are moved between your private and public clouds. If your private cloud is using KVM or VMware ESX and you want to use Amazon (and the Xen hypervisor), then you'll need conversion software.

''3. Developing a hybrid cloud environment is difficult:''

There are no out-of-the-box solutions for a private cloud, and certainly none for a hybrid cloud. As of right now, you have to patch together software from a number of vendors to build private and hybrid clouds.

''4. Understand your data: ''

The ability to run applications in a private cloud during peak usage hours and then offload to the public cloud during off-peak hours affords enormous flexibility. This provides maximum use and efficiency of both internal and external resources. You need to know which data needs high security, which data must be compliant with regulatory requirements and which data you can safely farm out to public clouds.

''5. Communication between private and public clouds: ''

You need a trusted arbitrator between the private cloud and the public cloud that allows you to make decisions on what goes to the public cloud. You'll also need to monitor the public cloud's delivery of resources to make sure that they are sufficient for what you need. You will not generally know whether your applications are running in the data center downstairs or in an Amazon public cloud. The capability to know where your applications and data are physically located is important because you do not want them moving from private to public and back again too frequently.

''6. Management: ''

Hybrid clouds require greater levels of automation management to achieve higher degrees of availability, performance, and security.
-----------------------------------------
!WordPress and Moodle Deployment
--------------------------------------------
!Which EC2 purchasing option should we choose?

When purchasing an EC2 instance in AWS, which of these is suitable for us?

1) On Demand

2) Reserved

3) Spot

4) Dedicated Host

e.g. we have 100 users whose usage in the first 15 days of the month is very low and in the last 15 days is very high.
----------------------------------
!Getting Started with AWS Kindle Edition

I found this PDF version online. Can anyone verify that it is indeed the same?
https://awsdocs.s3.amazonaws.com/gettingstarted/latest/awsgsg-intro.pdf
---------------
!Cloud
AliBaba Cloud

https://au.alibabacloud.com/

Digital Ocean 

https://www.digitalocean.com/

OpenStack

https://www.openstack.org/

Oracle

https://cloud.oracle.com/home

IBM Cloud

https://www.ibm.com/au-en/cloud

Yes you could build your own cloud. 
-----------------------------------
You can also check out

https://www.vultr.com/
--------------------------------------
!Networking
This course is at the Associate level.

According to the AWS exam guide, here is the content:

Domain 1: Design Resilient Architectures 

Domain 2: Define Performant Architectures 

Domain 3: Specify Secure Applications and Architectures 

Domain 4: Design Cost-Optimized Architectures 

Domain 5: Define Operationally Excellent Architectures 



https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS_Certified_Solutions_Architect_Associate_Feb_2018_%20Exam_Guide_v1.5.2.pdf
-------------------------------
!Key Pair for Instances

From a security point of view, is it advisable to create a key pair for every instance that is created and spun up, or would it be a management nightmare to track which key pair was used? What is the best practice for this in the real world?
-------------------------------

I can't say what is the best option, but I have two comments:

From an individual perspective, I use a separate key pair for each client I work for. It seems neater from a security point of view, although it does mean that I need to use "IdentitiesOnly yes" and IdentityFile for each host in my .ssh/config file to avoid the client trying too many keys when it attempts to log into a server; some servers have a limit on the number of public keys a client can try.

From a corporate perspective, when administering servers created from public AMIs, I prefer to use the same key pair for creating every EC2 instance, but then use Ansible or something to provision keys for each of your users onto that server. You can keep the private key for your "top-level" key pair secret, so your staff don't have access to it (except the people doing the provisioning). Ansible vaults may help. You can roll your own AMI after you have created the users, but this creates security/upkeep issues. Maybe it's better to use "instance data" when you create the instances to run a script that self-provisions the users.
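For reference, the "IdentitiesOnly yes" approach mentioned above might look like this in ~/.ssh/config (the hostname, user, and key file name here are made-up examples):

```
# ~/.ssh/config -- hypothetical host and key names
Host client-a-bastion
    HostName ec2-203-0-113-10.ap-southeast-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/client-a.pem
    IdentitiesOnly yes
```

With IdentitiesOnly set, the SSH client offers only the listed IdentityFile for that host instead of trying every loaded key, which avoids tripping the server's authentication-attempt limit.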
--------------------------
As Alastair suggests, it's a balance between host security and management. Ideally, every user/instance would be unique.

Personally, I bake my own base images with a default deployment key/profile and use Ansible to add roles from there.
------------------------------
!Migrating to different AWS EC2 instance type
Recently we were assessing migrating IBM WebSphere MQ servers from Windows to Linux (RHEL). The Windows servers were on m4.2xlarge. We wanted to check what the best alternative was when moving to Linux. Below are some of the resources we used:
 
https://aws.amazon.com/ec2/instance-types/

https://www.ec2instances.info/

https://www.awsprices.com/

https://aws.amazon.com/ec2/faqs/
 
Network performance benchmark

https://cloudonaut.io/ec2-network-performance-cheat-sheet/

 AWS Calculator

https://calculator.s3.amazonaws.com/index.html
---------------------
!Some cloud comparisons
Some interesting stats: https://youtu.be/2NbwlUzEDLA?list=PLZgTMWtBnKkosbeuwDrcfNxLxd5tiC3ye
------------------------------------
!Developing for A1 EC2 instances
What is the closest dev board in terms of CPU (aarch64) specs? Pi? Odroid? The idea is to develop locally, with testing on A1 clustered instances. The project is mostly C++ with some C.
------------------------
Okay, found some basic info here, it might involve a lot of trial and error with the compiler flags https://en.wikichip.org/wiki/annapurna_labs/alpine/al73400
-----------------------------
!Load balancing

We have mentioned a lot about AWS tonight; however, due to time restrictions many topics weren't covered, such as load balancing. Would you have any materials about that for us, such as a video?
-------------------------
https://aws.amazon.com/elasticloadbalancing/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html

https://hackernoon.com/what-is-amazon-elastic-load-balancer-elb-16cdcedbd485

https://exampleloadbalancer.com/
-------------------------------
!Stop vs shutdown an instance
You don't pay for stopped instances, but you do pay for:
- associated EBS volumes, which are charged until deleted
- Elastic IPs (EIPs), which are charged when not attached to a running instance
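As a rough sketch of what a stopped instance can still cost per month, here is some back-of-envelope Python. The per-GB and per-hour rates are illustrative placeholders only (actual prices vary by region and volume type), and `stopped_instance_monthly_cost` is just a name made up for this example:

```python
# Rough monthly cost of a *stopped* EC2 instance: compute is free, but
# attached EBS volumes and any idle Elastic IP keep billing.
# Prices below are illustrative only; check the current AWS price list.
EBS_GB_MONTH = 0.10      # hypothetical gp2 price, USD per GB-month
IDLE_EIP_HOUR = 0.005    # hypothetical charge for an EIP not on a running instance
HOURS_PER_MONTH = 730

def stopped_instance_monthly_cost(ebs_gb: int, idle_eips: int) -> float:
    storage = ebs_gb * EBS_GB_MONTH
    eip = idle_eips * IDLE_EIP_HOUR * HOURS_PER_MONTH
    return round(storage + eip, 2)

# 100 GB root volume plus one idle EIP:
print(stopped_instance_monthly_cost(100, 1))  # 13.65 (10.00 storage + 3.65 EIP)
```

So "stopped" is cheap but not free; to stop all charges you have to release the EIP and delete (or snapshot and delete) the volumes.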
--------------------------------------------------
!''Beware: EFS does not work with Windows OS''

For those thinking of mapping or sharing EFS on a Windows OS machine. Just take note that EFS will not work for this purpose.

AWS has a service called FSx (which is suitable for this purpose); however, it is currently only available in certain regions (see below):

EU (Ireland)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
US East (N. Virginia)
US East (Ohio)
US West (Oregon)
-----------------------------------
Pointing out this constraint is brilliant indeed; Amazon is clear about it as well, as you'll see here:

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
--------------------------------
!Leaky Buckets

I have read several stories of Amazon S3 buckets being "accidentally" exposed to the Internet, despite the tools, advice, and best practices Amazon makes available with regard to securing cloud storage. In a lot of cases, public access may be the desired result; in many other cases, it is a company's worst nightmare.



I'd imagine that some people might say that when data, particularly of the sensitive type, is stored on-premise as opposed to the cloud, it likely has defence-in-depth security layers that make it less likely to be exposed on the internet so easily. Does anyone have a counter-argument? Are there tools or methods that Amazon could employ proactively that might help users know sooner when their data is exposed?



I am not talking about data durability or availability; IMHO I can't imagine a company having the resources to compete with Amazon at scale on those capabilities.
------------------------
!Avoid confusing S3, EFS and EBS
Amazon Web Services (AWS) is well-known for its vast number of product offerings. There are (probably) a few AWS ninjas who know exactly how and when to use which Amazon product for what. The rest of us are in need of help.

Specifically in the storage arena, AWS provides three popular services — S3, Elastic Block Store (EBS), and Elastic File System (EFS) — which work quite differently and offer different levels of performance, cost, availability, and scalability. We'll discuss the use cases of these storage options, and compare their performance, cost, and accessibility to stored data.

''AWS Storage Options: A Primer''
''Amazon S3'' provides simple object storage, useful for hosting website images and videos, data analytics, and both mobile and web applications. Object storage manages data as objects, meaning all data types are stored in their native formats. There is no hierarchy of relations between files with object storage — data objects can be distributed across several machines. You can access the S3 service from anywhere on the internet.

''AWS EBS'' provides persistent block-level data storage. Block storage stores files in multiple volumes called blocks, which act as separate hard drives; block storage devices are more flexible and offer higher performance than regular file storage. You need to mount EBS onto an Amazon EC2 instance. Use cases include business continuity, software testing, and database management.

''AWS EFS'' is a shared, elastic file storage system that grows and shrinks as you add and remove files. It offers a traditional file storage paradigm, with data organized into directories and subdirectories. EFS is useful for SaaS applications and content management systems. You can mount EFS onto several EC2 instances at the same time. 

The following diagram illustrates the difference between object storage and block storage.
[img[https://dzone.com/storage/temp/6836375-block-object-storage.png]]

Image Source: NetApp Cloud (used with permission)

''Head to Head''
The table below compares Amazon S3, EBS, and EFS in terms of performance, cost, availability, accessibility, access control, and storage or file size limits enforced by each service.

[img[https://dzone.com/storage/temp/6836382-netapp-dzone-cloud-services-table.png]]
Which AWS Cloud Storage Service Is Best? 
As always, it depends.

*Amazon S3 is cheapest for data storage alone. However, there are various other pricing parameters in S3, including cost per number of requests made, S3 Analytics, and data transfer out of S3 per gigabyte. EFS has the simplest cost structure. 

*Amazon S3 can be accessed from anywhere. AWS EBS is only available in a particular region, while you can share files between regions on multiple EFS instances.

*EBS and EFS are both faster than Amazon S3, with high IOPS and lower latency.

*EBS is scalable up or down with a single API call. Since EBS is cheaper than EFS, you can use it for database backups and other low-latency interactive applications that require consistent, predictable performance.

*EFS is best used for large quantities of data, such as large analytic workloads. Data at this scale cannot be stored on a single EC2 instance allowed in EBS—requiring users to break up data and distribute it between EBS instances. The EFS service allows concurrent access to thousands of EC2 instances, making it possible to process and analyze large amounts of data seamlessly.
--------------------------------

Article is from here:
https://dzone.com/articles/confused-by-aws-storage-options-s3-ebs-amp-efs-explained

Posting the full content here may be a copyright violation.
---------------------------------------
!Types of Cloud Storage

''Types of Cloud Storage''

There are three types of cloud data storage: object storage, file storage, and block storage. Each offers its own advantages and has its own use cases:

Object Storage - Applications developed in the cloud often take advantage of object storage's vast scalability and metadata characteristics. Object storage solutions like Amazon Simple Storage Service (S3) are ideal for building modern applications from scratch that require scale and flexibility, and can also be used to import existing data stores for analytics, backup, or archive.

File Storage - Some applications need to access shared files and require a file system. This type of storage is often supported with a Network Attached Storage (NAS) server. File storage solutions like Amazon Elastic File System (EFS) are ideal for use cases like large content repositories, development environments, media stores, or user home directories.


Block Storage - Other enterprise applications like databases or ERP systems often require dedicated, low-latency storage for each host. This is analogous to direct-attached storage (DAS) or a Storage Area Network (SAN). Block-based cloud storage solutions like Amazon Elastic Block Store (EBS) are provisioned with each virtual server and offer the ultra-low latency required for high performance workloads.
--------------------------
Thanks for the information Mahesh 

But perhaps you may like to highlight the fact that this may be more specific to AWS. If you look at it in broader terms, the types of cloud storage can also be interpreted as:

Personal Cloud Storage. ...
Public Cloud Storage. ...
Private Cloud Storage. ...
Hybrid Cloud Storage.
-----------------------------------------------
!Share & Connect -LinkedIn Connections
Let's connect via LinkedIn so we can share updates and news about Tech & IT!

My profile name is the same as here. Send an invitation request that includes "AWS Architect" and I will accept your invitation, or share your LinkedIn profile on here!
-------------------------------------------------------------

Hi Waleed, sent you a connect via Linked In.

here are my details.

https://www.linkedin.com/in/kurt-walther-ba8614a
---------------------------------------------
https://www.linkedin.com/in/pymblesoftware/
-----------------------------------------------------------------
!Things to Consider When Choosing Your Storage Type
A couple of things need to be carefully considered when choosing your storage options for backing up to Amazon S3 or Glacier:

Do you have other backups?
Are any of your other backup sets offsite?
How much data do you have in your home folder, and on your machine in total?
What is your budget for online backups?
Amazon S3 and Glacier have different costs per GB for storage and upload but the same price for restores. It’s a little more expensive to upload to Glacier but cheaper to store the data there. It’s also important to note that there is a 3–5 hour waiting period for restores from Glacier, and a small fee for downloading the full contents of your backup. It’s up to you to bring out your favorite calculator and decide what will work for you.

If you have a large amount of data that won’t change a lot, that you are backing up to recover from a catastrophe, use Glacier. If you have a lot of data that changes frequently and that you’ll want quick restore access to, go for S3. 
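To make the S3-vs-Glacier decision concrete, here is a hypothetical yearly-cost sketch in Python. The per-GB rates are placeholders, not current AWS prices (and Glacier's restore fees and waiting periods have more nuance than one flat rate); plug in the real numbers for your region before deciding:

```python
# Back-of-envelope comparison of keeping a backup in S3 Standard vs Glacier.
# All rates below are hypothetical placeholders, USD per GB.
S3_GB_MONTH = 0.025          # S3 storage
GLACIER_GB_MONTH = 0.005     # Glacier storage (cheaper to keep)
GLACIER_RETRIEVAL_GB = 0.01  # extra fee when you restore from Glacier

def yearly_cost(gb: float, restores_per_year: int, use_glacier: bool) -> float:
    if use_glacier:
        store = gb * GLACIER_GB_MONTH * 12
        restore = gb * GLACIER_RETRIEVAL_GB * restores_per_year
        return round(store + restore, 2)
    return round(gb * S3_GB_MONTH * 12, 2)

# 500 GB catastrophe-only archive (hopefully never restored):
print(yearly_cost(500, 0, use_glacier=True))   # 30.0
print(yearly_cost(500, 0, use_glacier=False))  # 150.0
```

The arithmetic matches the advice above: rarely-touched archives favour Glacier, while frequently-restored data can erode Glacier's storage savings through retrieval fees (and the 3-5 hour wait).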
------------------------------
!AWS vs. Azure vs. Google: Storage
''AWS Storage:''

S3 to EFS: AWS offers a long list of storage services that includes its Simple Storage Service (S3) for object storage, Elastic Block Storage (EBS) for persistent block storage for use with EC2, and Elastic File System (EFS) for file storage. Some of its more innovative storage products include the Storage Gateway, which enables a hybrid storage environment, and Snowball, which is a physical hardware device that organizations can use to transfer petabytes of data in situations where Internet transfer isn't practical.

Database and archiving: On the database side, Amazon has a SQL-compatible database called Aurora, Relational Database Service (RDS), DynamoDB NoSQL database, ElastiCache in-memory data store, Redshift data warehouse, Neptune graph database and a Database Migration Service. Amazon offers Glacier, which is designed for long-term archival storage at very low rates. In addition, its Storage Gateway can be used to easily set up backup and archive processes.

''Azure Storage:''

Storage Services: Microsoft Azure's basic storage services include Blob Storage for REST-based object storage of unstructured data, Queue Storage for large-volume workloads, File Storage and Disk Storage. It also has a Data Lake Store, which is useful for big data applications.

Extensive Database: Azure's database options are particularly extensive. It has three SQL-based options: SQL Database, Database for MySQL and Database for PostgreSQL. It also has a Data Warehouse service, as well as Cosmos DB and Table Storage for NoSQL. Redis Cache is its in-memory service and the Server Stretch Database is its hybrid storage service designed specifically for organizations that use Microsoft SQL Server in their own data centers. Unlike AWS, Microsoft does offer an actual Backup service, as well as Site Recovery service and Archive Storage.

''Google Storage:''

Unified Storage and more: As with compute, GCP has a smaller menu of storage services available. Cloud Storage is its unified object storage service, and it also has a Persistent Disk option. It offers a Transfer Appliance similar to AWS Snowball, as well as online transfer services.

SQL and NoSQL: When it comes to databases, GCP has the SQL-based Cloud SQL and a relational database called Cloud Spanner that is designed for mission-critical workloads. It also has two NoSQL options: Cloud Bigtable and Cloud Datastore. It does not have backup and archive services.
---------------------
VMware just released an update on SAP HANA in-memory database testing on VMware's vSAN using virtualised storage.

Their discussion intro also showed customers moving this solution to the cloud, which was of interest.

For anyone interested, it is available here; register with an email address and then you can access the webinar.

https://onlinexperiences.com/scripts/Server.nxp?LASCmd=L:0&AI=1&ShowKey=65638&LoginType=0&InitialDisplay=1&ClientBrowser=0&DisplayItem=NULL&LangLocaleID=0&SSO=1&RFR=NULL
----------------
!Main Cloud Services providers

Is Amazon Web Services going to be the Google of cloud computing?
[img[https://cdn.geekwire.com/wp-content/uploads/2018/08/Screen-Shot-2018-08-02-at-9.26.42-AM-630x483.png]]
-----------------------
!Quiz Question 3

Hi All,

Could someone link some more information in order to answer Question 3 in the quiz, please?

I need to read some more information. Question below;

You have some large files that you would like customers to access securely. You have a web-based application that can check the customer's access to the files. Once they have been approved access, you would like the URL to expire within 1 hour.

Thank You.
----------------------------
Do read this

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
---------------------

Read about S3 security, CORS, Static website with S3

https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-presigned-urls.html
-----------------------------

!AWS storage types: relative speed/cost/redundancy

There are many different ways of storing data by/for compute resources inside AWS (e.g. EC2, Lambda), and in the case of S3, also programs running outside AWS or web browsers.  They mainly vary in terms of speed and complexity of access and are priced accordingly.

For more info, see Amazon S3 Storage Classes, Amazon Elastic Block Store, Amazon Elastic File System and Amazon FSx for Windows. 
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4179.jpg]]
Note that any given object in an S3 bucket can have a different storage class, i.e. sub-type from the above table.  Note that for S3 Standard-IA and S3 One Zone-IA, minimum costs for object size and lifetime apply.  EFS and FSx are only charged for storage used, but EBS is charged for the provisioned capacity, i.e. size of the volume (which can be expanded later, but not shrunk).  "Locked" means EBS volumes can only be mounted on EC2 instances in the AZ in which the volume was created.

''Cost comparisons''
*mega cheap: approx. $0.001 per GB per month
*super cheap: approx. $0.004 per GB per month
*very cheap: approx. $0.01 per GB per month
*slightly cheap: approx. $0.0125 per GB per month
*medium: approx. $0.025 per GB per month
*slightly expensive: approx. $0.04 per GB per month
*expensive: approx. $0.045 per GB per month
*really expensive: approx. $0.1 per GB per month
*super expensive: approx. $0.13 per GB per month
*hideously expensive: approx. $0.3 per GB per month
--------------------------
!IOPS

Can someone please explain in layman's terms how IOPS work?
----------------------------

Did you listen to my description on the webinar ?

Input Output Operations per Second



1 IOP is 1 read (or write) operation. In 1 operation you can request the contents of 1 block of a block device. 



Reading a file takes several IOPS:

- reading the directory/folder: a few operations, depending on how big the folder is

- a few more if there are several levels of folders

- a block is most often 4 KB, so reading a 15 KB file would take another 4 IOPS

When you read a second file, though, the directory operations are likely to be cached.

So 1000 IOPS means you can read 1000 blocks of a file per second, or the first block of 1000 files per second, assuming the folders are cached in memory.

More IOPS means you can access your storage faster.
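The block arithmetic above can be sketched as a quick calculation (illustrative only; the 4 KB block size is the example's assumption, not a fixed property of every device, and directory lookups are assumed cached):

```python
import math

def iops_to_read(file_kb, block_kb=4):
    """Number of read operations needed to fetch a file,
    assuming one operation fetches one block."""
    return math.ceil(file_kb / block_kb)

# A 15 KB file spans four 4 KB blocks, so it costs 4 IOPS to read.
print(iops_to_read(15))  # 4

def files_per_second(iops_budget, file_kb, block_kb=4):
    """Whole files readable per second within an IOPS budget,
    ignoring directory lookups (assumed cached)."""
    return iops_budget // iops_to_read(file_kb, block_kb)

# At 1000 IOPS, 15 KB files can be read at roughly 250 files/second.
print(files_per_second(1000, 15))  # 250
```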
--------------------------
!Getting data back from AWS

Is there a reverse process like Snowball if you wanted to do a bulk transfer out of AWS storage?

Do all the cloud players have high speed links between themselves for creating hybrid solutions?
-----------------------------

One of the revenue maximising strategies for cloud services providers is that they don't want to make it easy to migrate away from them.  I have noticed this in a few areas, like VM images or RDS snapshots, where it's hard to get stuff out of AWS.  Typically, this would be straightforward on self-hosted or managed platforms.  In other cases, the way things (like VPC security groups) are structured inside AWS suits the nature of the platform, but doesn't translate well to industry standard concepts.

I don't blame AWS for this, because their focus is on making a full-featured, high performance, minimal cost offering and staying ahead of the competition.  I just wish they would offer an equivalent to Google Takeout, where you can get any data you have stored in the platform, even if it comes out as JSON.
-----------------------
!S3 as a fileshare
Given S3 is an object level store, when a user changes a file it will need to be uploaded from scratch to the bucket.

As opposed to EBS, where a file change is a delta at block level.  Is it really practical for a busy corporate fileshare to reside on S3?
---------------------------
!Confused about S3 and others
--------------------------
[[win10|dwin10]]
[[win7|dwin7]]
[[common|dcom]]
!General
Longer EC2, EBS, and Storage Gateway resource IDs | Overview | Service level agreement (SLA)

!Longer EC2, EBS, and Storage Gateway resource IDs
''Q: What is changing?''

Starting July 2018, all newly created EC2 resources will receive longer format IDs. The new format will only apply to newly created resources; your existing resources won’t be affected. Instances and volumes already use this ID format. Through the end of June 2018, customers will have the ability to opt-in to use longer IDs. During this time, you can choose which ID format resources are assigned and update your management tools and scripts to add support for the longer format. Please visit this documentation for instructions.

''Q: Why is this necessary?''

Given how fast AWS continues to grow, we will start to run low on IDs for certain resources in 2018. In order to enable the long-term, uninterrupted creation of new resources, we need to introduce a longer ID format. All Amazon EC2 resource IDs will change to the longer format in July 2018.

''Q: I already opted in for longer IDs last year. Why do I need to opt-in again?''

In 2016, we moved to the longer ID format for Amazon EC2 instances, reservations, volumes, and snapshots only. This opt-in changes the ID format for all remaining EC2 resource types.

''Q: What will the new identifier format look like?''

The new identifier format will follow the pattern of the current identifier format, but it will be longer. The new format will be <resource type prefix>-<17 characters>, e.g. “vpc-1234567890abcdef0” for VPCs or “subnet-1234567890abcdef0” for subnets.

''Q: Which IDs are changing?''

bundle
conversion-task
customer-gateway
dhcp-options
elastic-ip-allocation
elastic-ip-association
export-task
flow-log
image
import-task
internet-gateway
network-acl
network-acl-association
network-interface
network-interface-attachment
prefix-list
route-table
route-table-association
security-group
subnet
subnet-cidr-block-association
vpc
vpc-cidr-block-association
vpc-endpoint
vpc-peering-connection
vpn-connection
vpn-gateway
''Q: How does this impact me?''

There is a good chance that you won’t need to make any system changes to handle the new format. If you only use the console to manage AWS resources, you might not be impacted at all, but you should still update your settings to use the longer ID format as soon as possible. If you interact with AWS resources via APIs, SDKs, or the AWS CLI, you might be impacted, depending on whether your software makes assumptions about the ID format when validating or persisting resource IDs. If this is the case, you might need to update your systems to handle the new format.
Some failure modes could include:

*If your systems use regular expressions to validate the ID format, validation might fail when a longer format is encountered.
*If your database schemas make assumptions about the ID length, you might be unable to store a longer ID.
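The first failure mode is easy to reproduce: a validation regex written for the original 8-hex-character suffix rejects the longer 17-character form, while a pattern that accepts both lengths survives the transition (a sketch; the sample IDs are the illustrative values from this FAQ):

```python
import re

# Too strict: assumes the original 8-hex-character suffix.
OLD_ONLY = re.compile(r"^vpc-[0-9a-f]{8}$")

# Transition-safe: accepts both the short and the long suffix.
BOTH = re.compile(r"^vpc-(?:[0-9a-f]{8}|[0-9a-f]{17})$")

short_id = "vpc-12345678"
long_id = "vpc-1234567890abcdef0"

print(bool(OLD_ONLY.match(long_id)))  # False: the strict pattern breaks
print(bool(BOTH.match(short_id)))     # True
print(bool(BOTH.match(long_id)))      # True
```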
''Q: Will this affect existing resources?''

No. Only resources that are created after you opt-in to the longer format will be affected. Once a resource has been assigned an ID (long or short), that ID will never change. Each ID is unique and will never be reused. Any resource created with the old ID format will always retain its shorter ID. Any resource created with the new format will retain its longer ID, even if you opt back out.

''Q: When will this happen?''

Through the end of June 2018, longer IDs will be available for opt-in via APIs and the EC2 Console. All accounts can opt in and out of longer IDs as needed for testing. Starting on July 1, 2018, the option to switch formats will no longer be available, and newly created EC2 resources will receive longer IDs. All regions launching in July 2018 and onward will only support longer IDs.

''Q: Why is there an opt-in period?''

We want to give you as much time as possible to test your systems with the new format. This transition time offers maximum flexibility to test and update your systems incrementally and will help minimize interruptions as you add support for the new format, if necessary.

''Q: How do I opt in and out of receiving longer IDs?''

Throughout the transition period (Now through the end of June 2018), you can opt to receive longer or shorter IDs by using the APIs or the EC2 Console. Instructions are provided in this documentation.

''Q: What will happen if I take no action?''

If you do not opt in to the new format during the transition period, you will automatically begin receiving the longer format IDs after July 1, 2018. We do not recommend this approach. It is better to add support for the new format during the transition window, which offers the opportunity for controlled testing.

''Q: What if I prefer to keep receiving the shorter ID format after the end of June 2018?''

This is not possible, regardless of the user settings you have specified.

''Q: When will the longer IDs’ final transition happen?''

In July 2018, your newly created resources will start to receive longer IDs. You can check the scheduled transition date for each of your regions by using the AWS CLI command describe-id-format.

''Q: If I opt in to longer IDs and then opt back out during the transition period, what will happen to resources that were created with longer IDs?''

Once a resource has been assigned an ID it will not change, so resources that are created with longer IDs will retain the longer IDs regardless of later actions. If you opt in to the longer format, create resources, and then opt out, you will see a mix of long and short resource IDs, even after opting out. The only way to get rid of long IDs will be to delete or terminate the respective resources. For this reason, exercise caution and avoid creating critical resources with the new format until you have tested your tools and automation.

''Q: What should I do if my systems are not working as expected before the transition period ends?''

If your systems are not working as expected during the transition period, you can temporarily opt out of longer format IDs and remediate your systems, however your account will automatically be transitioned back to using longer IDs after the end of June 2018. Regardless of your account settings, all new resources will receive the longer format IDs, so it is important for you to test your systems with longer format IDs before the transition period ends. By testing and opting in earlier, you give yourself valuable time to make modifications to your resources with short resource IDs and you minimize the risk of any impact to your systems.

''Q: What will happen if I launch resources in multiple regions during the transition period?''

Your resources’ ID length will depend upon the region you launch your resources. If the region has already transitioned to using longer IDs, resources launched in that region will have longer format IDs; if not, they will have shorter resource IDs. Therefore, during the transition window, you may see a mix of shorter and longer resource IDs.

''Q: If AWS adds new regions during the transition period, will new regions support longer IDs?''

Yes. All new regions launching after July 2018 will issue longer format IDs by default for both new and existing accounts.

''Q: What will be the default ID type for new accounts?''

Accounts created on March 15, 2018 or later will be configured to receive the longer ID format by default in every AWS region except AWS GovCloud (US). If you are a new customer, this will make the transition to longer IDs really simple. If you would like your new account to assign the shorter ID format to your resources, then simply reconfigure your account for shorter IDs as described above. This workflow will be necessary until you are ready for your accounts to receive longer IDs.

''Q: Will I need to upgrade to a new version of the AWS SDKs or CLI?''

The following AWS CLI and SDKs are fully compatible with longer IDs: PHP v2.8.27+, PHP v3.15.0+, AWS CLI v1.10.2+, Boto3v1.2.1+, Botocorev1.3.24+, PHP v1, Boto v1, Boto v2, Ruby v1, Ruby v2, JavaScript, Java, .NET, AWS Tools for Windows PowerShell, and Go.

''Q: How can I test my systems with longer IDs?''

Amazon Machine Images (AMIs) with longer format IDs have been published for testing purposes. Instructions on how to access these AMIs are provided here.

!Overview
''Q: What is Amazon Elastic Compute Cloud (Amazon EC2)?''

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

''Q: What can I do with Amazon EC2?''

Just as Amazon Simple Storage Service (Amazon S3) enables storage in the cloud, Amazon EC2 enables “compute” in the cloud. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.

''Q: How can I get started with Amazon EC2?''

To sign up for Amazon EC2, click the “Sign up for This Web Service” button on the Amazon EC2 detail page. You must have an Amazon Web Services account to access this service; if you do not already have one, you will be prompted to create one when you begin the Amazon EC2 sign-up process. After signing up, please refer to the Amazon EC2 documentation, which includes our Getting Started Guide.

''Q: Why am I asked to verify my phone number when signing up for Amazon EC2?''

Amazon EC2 registration requires you to have a valid phone number and email address on file with AWS in case we ever need to contact you. Verifying your phone number takes only a couple of minutes and involves receiving a phone call during the registration process and entering a PIN number using the phone key pad.

''Q: What can developers now do that they could not before?''

Until now, small developers did not have the capital to acquire massive compute resources and ensure they had the capacity they needed to handle unexpected spikes in load. Amazon EC2 enables any developer to leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure they have the compute capacity they need to meet their business requirements.

The “Elastic” nature of the service allows developers to instantly scale to meet spikes in traffic or demand. When computing requirements unexpectedly change (up or down), Amazon EC2 can instantly respond, meaning that developers have the ability to control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have a limited ability to easily respond when their usage is rapidly changing, unpredictable, or is known to experience large peaks at various intervals.

''Q: How do I run systems in the Amazon EC2 environment?''

Once you have set up your account and select or create your AMIs, you are ready to boot your instance. You can start your AMI on any number of On-Demand instances by using the RunInstances API call. You simply need to indicate how many instances you wish to launch. If you wish to run more than 20 On-Demand instances, complete the Amazon EC2 instance request form.

If Amazon EC2 is able to fulfill your request, RunInstances will return success, and we will start launching your instances. You can check on the status of your instances using the DescribeInstances API call. You can also programmatically terminate any number of your instances using the TerminateInstances API call.

If you have a running instance using an Amazon EBS boot partition, you can also use the StopInstances API call to release the compute resources but preserve the data on the boot partition. You can use the StartInstances API when you are ready to restart the associated instance with the Amazon EBS boot partition.

In addition, you have the option to use Spot Instances to reduce your computing costs when you have flexibility in when your applications can run. Read more about Spot Instances for a more detailed explanation on how Spot Instances work.

If you prefer, you can also perform all these actions from the AWS Management Console or through the command line using our command line tools, which have been implemented with this web service API.

''Q: What is the difference between using the local instance store and Amazon Elastic Block Store (Amazon EBS) for the root device?''

When you launch your Amazon EC2 instances you have the ability to store your root device data on Amazon EBS or the local instance store. By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.

Alternatively, the local instance store only persists during the life of the instance. This is an inexpensive way to launch instances where data is not stored to the root device. For example, some customers use this option to run large web sites where each instance is a clone to handle web traffic.

''Q: How quickly will systems be running?''

It typically takes less than 10 minutes from the issue of the RunInstances call to the point where all requested instances begin their boot sequences. This time depends on a number of factors including: the size of your AMI, the number of instances you are launching, and how recently you have launched that AMI. Images launched for the first time may take slightly longer to boot.

''Q: How do I load and store my systems with Amazon EC2?''

Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. You might have just one AMI or you might compose your system out of several building block AMIs (e.g., webservers, appservers, and databases). Amazon EC2 provides a number of tools to make creating an AMI easy. Once you create a custom AMI, you will need to bundle it. If you are bundling an image with a root device backed by Amazon EBS, you can simply use the bundle command in the AWS Management Console. If you are bundling an image with a boot partition on the instance store, then you will need to use the AMI Tools to upload it to Amazon S3. Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable, scalable storage of your AMIs so that we can boot them when you ask us to do so.

Or, if you want, you don’t have to set up your own AMI from scratch. You can choose from a number of globally available AMIs that provide useful instances. For example, if you just want a simple Linux server, you can choose one of the standard Linux distribution AMIs.

''Q: How do I access my systems?''

The RunInstances call that initiates execution of your application stack will return a set of DNS names, one for each system that is being booted. This name can be used to access the system exactly as you would if it were in your own data center. You own that machine while your operating system stack is executing on it.

''Q: Is Amazon EC2 used in conjunction with Amazon S3?''

Yes, Amazon EC2 is used jointly with Amazon S3 for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their AMIs into Amazon S3 and to move them between Amazon S3 and Amazon EC2. See How do I load and store my systems with Amazon EC2? for more information about AMIs.

We expect developers to find the combination of Amazon EC2 and Amazon S3 to be very useful. Amazon EC2 provides cheap, scalable compute in the cloud while Amazon S3 allows users to store their data reliably.

''Q: How many instances can I run in Amazon EC2?''

You are limited to running up to a total of 20 On-Demand instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS accounts may start with limits that are lower than the limits described here. Certain instance types are further limited per region as follows:
|!Instance Type|!On-Demand Limit|!Reserved Limit|!Spot Limit|
|m5.large|20|20|Dynamic Spot Limit|
|m5.xlarge|20|20|Dynamic Spot Limit|
|m5.2xlarge|20|20|Dynamic Spot Limit|
|m5.4xlarge|10|20|Dynamic Spot Limit|
|m5.12xlarge|5|20|Dynamic Spot Limit|
|m5.24xlarge|5|20|Dynamic Spot Limit|
|m4.4xlarge|10|20|Dynamic Spot Limit|
|m4.10xlarge|5|20|Dynamic Spot Limit|
|m4.16xlarge|5|20|Dynamic Spot Limit|
|c5.large|20|20|Dynamic Spot Limit|
|c5.xlarge|20|20|Dynamic Spot Limit|
|c5.2xlarge|20|20|Dynamic Spot Limit|
|c5.4xlarge|10|20|Dynamic Spot Limit|
|c5.9xlarge|5|20|Dynamic Spot Limit|
|c5.18xlarge|5|20|Dynamic Spot Limit|
|c4.4xlarge|10|20|Dynamic Spot Limit|
|c4.8xlarge|5|20|Dynamic Spot Limit|
|hs1.8xlarge|2|20|Not offered|
|cr1.8xlarge|2|20|Dynamic Spot Limit|
|p3.2xlarge|1|20|Dynamic Spot Limit|
|p3.8xlarge|1|20|Dynamic Spot Limit|
|p3.16xlarge|1|20|Dynamic Spot Limit|
|p2.xlarge|1|20|Dynamic Spot Limit|
|p2.8xlarge|1|20|Dynamic Spot Limit|
|p2.16xlarge|1|20|Dynamic Spot Limit|
|g3.4xlarge|1|20|Dynamic Spot Limit|
|g3.8xlarge|1|20|Dynamic Spot Limit|
|g3.16xlarge|1|20|Dynamic Spot Limit|
|r4.large|20|20|Dynamic Spot Limit|
|r4.xlarge|20|20|Dynamic Spot Limit|
|r4.2xlarge|20|20|Dynamic Spot Limit|
|r4.4xlarge|10|20|Dynamic Spot Limit|
|r4.8xlarge|5|20|Dynamic Spot Limit|
|r4.16xlarge|1|20|Dynamic Spot Limit|
|r3.4xlarge|10|20|Dynamic Spot Limit|
|r3.8xlarge|5|20|Dynamic Spot Limit|
|h1.8xlarge|10|20|Dynamic Spot Limit|
|h1.16xlarge|5|20|Dynamic Spot Limit|
|i3.large|2|20|Dynamic Spot Limit|
|i3.xlarge|2|20|Dynamic Spot Limit|
|i3.2xlarge|2|20|Dynamic Spot Limit|
|i3.4xlarge|2|20|Dynamic Spot Limit|
|i3.8xlarge|2|20|Dynamic Spot Limit|
|i3.16xlarge|2|20|Dynamic Spot Limit|
|i2.2xlarge|8|20|Dynamic Spot Limit|
|i2.4xlarge|4|20|Dynamic Spot Limit|
|i2.8xlarge|2|20|Dynamic Spot Limit|
|d2.4xlarge|10|20|Dynamic Spot Limit|
|d2.8xlarge|5|20|Dynamic Spot Limit|
|t2.nano|20|20|Dynamic Spot Limit|
|t2.micro|20|20|Dynamic Spot Limit|
|t2.small|20|20|Dynamic Spot Limit|
|t2.medium|20|20|Dynamic Spot Limit|
|t2.large|20|20|Dynamic Spot Limit|
|t2.xlarge|20|20|Dynamic Spot Limit|
|t2.2xlarge|20|20|Dynamic Spot Limit|
|All Other Instance Types|20|20|Dynamic Spot Limit|
Note that cc2.8xlarge, hs1.8xlarge, cr1.8xlarge, G2, D2, and I2 instances are not available in all regions.

If you need more instances, complete the Amazon EC2 instance request form with your use case and your instance increase will be considered. Limit increases are tied to the region they were requested for.
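A launch script can sanity-check a planned request against the per-type On-Demand limits before calling the API (a minimal sketch; the dictionary below holds just a few rows from the table above, defaulting to 20 for unlisted types, and a real account's limits may differ):

```python
# On-Demand limits for a few instance types from the table above;
# types not listed default to 20.
ON_DEMAND_LIMITS = {
    "m5.4xlarge": 10,
    "m5.12xlarge": 5,
    "p3.2xlarge": 1,
    "hs1.8xlarge": 2,
}

def can_launch(instance_type, running, requested):
    """True if launching `requested` more instances of this type
    stays within the default On-Demand limit."""
    limit = ON_DEMAND_LIMITS.get(instance_type, 20)
    return running + requested <= limit

print(can_launch("m5.large", 15, 5))   # True: 20 <= default limit of 20
print(can_launch("p3.2xlarge", 0, 2))  # False: limit is 1
```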

''Q: Are there any limitations in sending email from Amazon EC2 instances?''

Yes. In order to maintain the quality of Amazon EC2 addresses for sending email, we enforce default limits on the amount of email that can be sent from EC2 accounts. If you wish to send larger amounts of email from EC2, you can apply to have these limits removed from your account by filling out this form.

''Q: How quickly can I scale my capacity both up and down?''

Amazon EC2 provides a truly elastic computing environment. Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. When you need more instances, you simply call RunInstances, and Amazon EC2 will typically set up your new instances in a matter of minutes. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.

''Q: What operating system environments are supported?''

Amazon EC2 currently supports a variety of operating systems including: Amazon Linux, Ubuntu, Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, CentOS, Gentoo Linux, Oracle Linux, and FreeBSD. We are looking for ways to expand it to other platforms.

''Q: Does Amazon EC2 use ECC memory?''

In our experience, ECC memory is necessary for server infrastructure, and all the hardware underlying Amazon EC2 uses ECC memory.

''Q: How is this service different than a plain hosting service?''

Traditional hosting services generally provide a pre-configured resource for a fixed amount of time and at a predetermined cost. Amazon EC2 differs fundamentally in the flexibility, control and significant cost savings it offers developers, allowing them to treat Amazon EC2 as their own personal data center with the benefit of Amazon.com’s robust infrastructure.

When computing requirements unexpectedly change (up or down), Amazon EC2 can instantly respond, meaning that developers have the ability to control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have a limited ability to easily respond when their usage is rapidly changing, unpredictable, or is known to experience large peaks at various intervals.

Secondly, many hosting services don’t provide full control over the compute resources being provided. Using Amazon EC2, developers can choose not only to initiate or shut down instances at any time, they can completely customize the configuration of their instances to suit their needs – and change it at any time. Most hosting services cater more towards groups of users with similar system requirements, and so offer limited ability to change these.

Finally, with Amazon EC2 developers enjoy the benefit of paying only for their actual resource consumption – and at very low rates. Most hosting services require users to pay a fixed, up-front fee irrespective of their actual computing power used, and so users risk overbuying resources to compensate for the inability to quickly scale up resources within a short time frame. 

!Service level agreement (SLA)
''Q. What does your Amazon EC2 Service Level Agreement guarantee?''

Our SLA guarantees a Monthly Uptime Percentage of at least 99.99% for Amazon EC2 and Amazon EBS within a Region.

''Q. How do I know if I qualify for a SLA Service Credit?''

You are eligible for a SLA credit for either Amazon EC2 or Amazon EBS (whichever was Unavailable, or both if both were Unavailable) if the Region that you are operating in has a Monthly Uptime Percentage of less than 99.95% during any monthly billing cycle. For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see http://aws.amazon.com/ec2/sla/
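The eligibility check is simple arithmetic (a sketch using the 99.95% credit threshold quoted above; the 30-day month is an assumption for the example, and the real SLA terms on the linked page govern actual claims):

```python
def monthly_uptime_pct(downtime_minutes, days_in_month=30):
    """Monthly Uptime Percentage given total minutes of unavailability."""
    total = days_in_month * 24 * 60  # 43,200 minutes in a 30-day month
    return (total - downtime_minutes) / total * 100

def sla_credit_eligible(downtime_minutes, days_in_month=30):
    """Eligible for a credit when uptime falls below 99.95%."""
    return monthly_uptime_pct(downtime_minutes, days_in_month) < 99.95

# 99.95% of a 30-day month allows about 21.6 minutes of downtime.
print(sla_credit_eligible(10))  # False: ~99.977% uptime
print(sla_credit_eligible(60))  # True: ~99.861% uptime
```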

====
!Instance types
Accelerated Computing instances | Compute Optimized instances | General Purpose instances | High Memory instances | Memory Optimized instances | Previous Generation instances | Storage Optimized instances

!Accelerated Computing instances
''Q: What are Accelerated Computing instances?''

The Accelerated Computing instance family is a family of instances that use hardware accelerators, or co-processors, to perform some functions, such as floating-point number calculation and graphics processing, more efficiently than is possible in software running on CPUs. Amazon EC2 provides three types of Accelerated Computing instances: GPU compute instances for general-purpose computing, GPU graphics instances for graphics-intensive applications, and FPGA programmable hardware compute instances for advanced scientific workloads.

''Q. When should I use GPU Graphics and Compute instances?''

GPU instances work best for applications with massive parallelism, such as workloads using thousands of threads. Graphics processing is an example with huge computational requirements, where each of the tasks is relatively small, the set of operations performed forms a pipeline, and the throughput of this pipeline is more important than the latency of the individual operations. To build applications that exploit this level of parallelism, one needs GPU-device-specific knowledge: an understanding of how to program against the various graphics APIs (DirectX, OpenGL) or GPU compute programming models (CUDA, OpenCL).

''Q: How are P3 instances different from G3 instances?''

P3 instances are the next-generation of EC2 general-purpose GPU computing instances, powered by up to 8 of the latest-generation NVIDIA Tesla V100 GPUs. These new instances significantly improve performance and scalability, and add many new features, including new Streaming Multiprocessor (SM) architecture for machine learning (ML)/deep learning (DL) performance optimization, second-generation NVIDIA NVLink high-speed GPU interconnect, and highly tuned HBM2 memory for higher-efficiency.

G3 instances use NVIDIA Tesla M60 GPUs and provide a high-performance platform for graphics applications using DirectX or OpenGL. NVIDIA Tesla M60 GPUs support NVIDIA GRID Virtual Workstation features, and H.265 (HEVC) hardware encoding. Each M60 GPU in G3 instances supports 4 monitors with resolutions up to 4096x2160, and is licensed to use NVIDIA GRID Virtual Workstation for one Concurrent Connected User. Example applications of G3 instances include 3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding, and other server-side graphics workloads.

''Q: What are the benefits of NVIDIA Volta GV100 GPUs?''

The new NVIDIA Tesla V100 accelerator incorporates the powerful new Volta GV100 GPU. GV100 not only builds upon the advances of its predecessor, the Pascal GP100 GPU, it significantly improves performance and scalability, and adds many new features that improve programmability. These advances will supercharge HPC, data center, supercomputer, and deep learning systems and applications.

''Q: Who will benefit from P3 instances?''

P3 instances with their high computational performance will benefit users in artificial intelligence (AI), machine learning (ML), deep learning (DL) and high performance computing (HPC) applications. Users include data scientists, data architects, data analysts, scientific researchers, ML engineers, IT managers and software developers. Key industries include transportation, energy/oil & gas, financial services (banking, insurance), healthcare, pharmaceutical, sciences, IT, retail, manufacturing, high-tech, government, and academia, among many others.

''Q: What are some key use cases of P3 instances?''

P3 instances use GPUs to accelerate numerous deep learning systems and applications including autonomous vehicle platforms, speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, to name just a few.

''Q: Why should customers use GPU-powered Amazon P3 instances for AI/ML and HPC?''

GPU-based compute instances provide greater throughput and performance because they are designed for massively parallel processing using thousands of specialized cores per GPU, versus CPUs offering sequential processing with a few cores. In addition, developers have built hundreds of GPU-optimized scientific HPC applications such as quantum chemistry, molecular dynamics, meteorology, among many others. Research indicates that over 70% of the most popular HPC applications provide built-in support for GPUs.

''Q: Will P3 instances support EC2 Classic networking and Amazon VPC?''

P3 instances will support VPC only.

''Q. How are G3 instances different from P2 instances?''

G3 instances use NVIDIA Tesla M60 GPUs and provide a high-performance platform for graphics applications using DirectX or OpenGL. NVIDIA Tesla M60 GPUs support NVIDIA GRID Virtual Workstation features, and H.265 (HEVC) hardware encoding. Each M60 GPU in G3 instances supports 4 monitors with resolutions up to 4096x2160, and is licensed to use NVIDIA GRID Virtual Workstation for one Concurrent Connected User. Example applications of G3 instances include 3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding, and other server-side graphics workloads.

P2 instances use NVIDIA Tesla K80 GPUs and are designed for general purpose GPU computing using the CUDA or OpenCL programming models. P2 instances provide customers with high bandwidth 25 Gbps networking, powerful single and double precision floating-point capabilities, and error-correcting code (ECC) memory, making them ideal for deep learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.

''Q: How are P3 instances different from G2 instances?''

P3 Instances are the next-generation of EC2 general-purpose GPU computing instances, powered by up to 8 of the latest-generation NVIDIA Volta GV100 GPUs. These new instances significantly improve performance and scalability and add many new features, including new Streaming Multiprocessor (SM) architecture, optimized for machine learning (ML)/deep learning (DL) performance, second-generation NVIDIA NVLink high-speed GPU interconnect, and highly tuned HBM2 memory for higher-efficiency.

P2 instances use NVIDIA Tesla K80 GPUs and are designed for general purpose GPU computing using the CUDA or OpenCL programming models. P2 instances provide customers with high bandwidth 25 Gbps networking, powerful single and double precision floating-point capabilities, and error-correcting code (ECC) memory.

''Q. What APIs and programming models are supported by GPU Graphics and Compute instances?''

P3 instances support CUDA 9 and OpenCL; P2 instances support CUDA 8 and OpenCL 1.2; and G3 instances support DirectX 12, OpenGL 4.5, CUDA 8, and OpenCL 1.2.
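
As a quick reference, the version support listed above can be captured in a small lookup table; a minimal sketch (the family names and API versions are taken directly from this answer, and `supports` is a hypothetical helper):

```python
# Supported APIs per GPU instance family, as listed in this FAQ.
SUPPORTED_APIS = {
    "p3": {"CUDA 9", "OpenCL"},
    "p2": {"CUDA 8", "OpenCL 1.2"},
    "g3": {"DirectX 12", "OpenGL 4.5", "CUDA 8", "OpenCL 1.2"},
}

def supports(family: str, api: str) -> bool:
    """Return True if the given instance family lists the API."""
    return api in SUPPORTED_APIS.get(family.lower(), set())

print(supports("G3", "OpenGL 4.5"))  # True
print(supports("P2", "DirectX 12"))  # False
```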

''Q. Where do I get NVIDIA drivers for P3 and G3 instances?''

There are two methods by which NVIDIA drivers may be obtained. The AWS Marketplace offers Amazon Linux AMIs and Windows Server AMIs with the NVIDIA drivers pre-installed. Alternatively, you may launch 64-bit HVM AMIs and install the drivers yourself: visit the NVIDIA driver website and search for the NVIDIA Tesla V100 for P3 instances, the NVIDIA Tesla K80 for P2 instances, or the NVIDIA Tesla M60 for G3 instances.

''Q. Which AMIs can I use with P3, P2 and G3 instances?''

You can currently use Windows Server, SUSE Enterprise Linux, Ubuntu, and Amazon Linux AMIs on P2 and G3 instances. P3 instances only support HVM AMIs. If you want to launch AMIs with operating systems not listed here, contact AWS Customer Support with your request or reach out through EC2 Forums.

''Q. Does the use of G2 and G3 instances require third-party licenses?''

Aside from the NVIDIA drivers and GRID SDK, the use of G2 and G3 instances does not necessarily require any third-party licenses. However, you are responsible for determining whether your content or technology used on G2 and G3 instances requires any additional licensing. For example, if you are streaming content you may need licenses for some or all of that content. If you are using third-party technology such as operating systems, audio and/or video encoders, and decoders from Microsoft, Thomson, Fraunhofer IIS, Sisvel S.p.A., MPEG-LA, and Coding Technologies, please consult these providers to determine if a license is required. For example, if you leverage the on-board H.264 video encoder on the NVIDIA GRID GPU, you should reach out to MPEG-LA for guidance, and if you use mp3 technology, you should contact Thomson for guidance.

''Q. Why am I not getting NVIDIA GRID features on G3 instances using the driver downloaded from NVIDIA website?''

The NVIDIA Tesla M60 GPU used in G3 instances requires a special NVIDIA GRID driver to enable all advanced graphics features and support for 4 monitors with resolutions up to 4096x2160. You need to use an AMI with the NVIDIA GRID driver pre-installed, or download and install the NVIDIA GRID driver following the AWS documentation.

''Q. Why am I unable to see the GPU when using Microsoft Remote Desktop?''

When using Remote Desktop, GPUs using the WDDM driver model are replaced with a non-accelerated Remote Desktop display driver. In order to access your GPU hardware, you need to utilize a different remote access tool, such as VNC.

''Q. What is Amazon EC2 F1?''

Amazon EC2 F1 is a compute instance with programmable hardware you can use for application acceleration. The F1 instance type provides a high-performance, easy-to-access FPGA for developing and deploying custom hardware accelerations.

''Q. What are FPGAs and why do I need them?''

FPGAs are programmable integrated circuits that you can configure using software. By using FPGAs you can accelerate your applications up to 30x when compared with servers that use CPUs alone. And, FPGAs are reprogrammable, so you get the flexibility to update and optimize your hardware acceleration without having to redesign the hardware.

''Q. How does F1 compare with traditional FPGA solutions?''

F1 is an AWS instance with programmable hardware for application acceleration. With F1, you have access to FPGA hardware in a few simple clicks, reducing the time and cost of full-cycle FPGA development and scale deployment from months or years to days. While FPGA technology has been available for decades, application acceleration with FPGAs has struggled to succeed, both in the development of accelerators and in the business model of selling custom hardware to traditional enterprises, due to the time and cost of development infrastructure, hardware design, and at-scale deployment. With this offering, customers avoid the undifferentiated heavy lifting associated with developing FPGAs in on-premises data centers.

''Q: What is an Amazon FPGA Image (AFI)?''

The design that you create to program your FPGA is called an Amazon FPGA Image (AFI). AWS provides a service to register, manage, copy, query, and delete AFIs. After an AFI is created, it can be loaded on a running F1 instance. You can load multiple AFIs to the same F1 instance, and can switch between AFIs at runtime without rebooting. This lets you quickly test and run multiple hardware accelerations in rapid sequence. You can also offer a combination of your FPGA acceleration and an AMI with custom software or AFI drivers to other customers on the AWS Marketplace.

''Q. How do I list my hardware acceleration on the AWS Marketplace?''

You would develop your AFI and the software drivers/tools to use this AFI. You would then package these software tools/drivers into an Amazon Machine Image (AMI) in an encrypted format. AWS manages all AFIs in the encrypted format you provide to maintain the security of your code. To sell a product in the AWS Marketplace, you or your company must sign up to be an AWS Marketplace reseller. You would then submit your AMI ID and the AFI ID(s) intended to be packaged in a single product. AWS Marketplace will take care of cloning the AMI and AFI(s) to create a product, and associating a product code with these artifacts, such that any end user subscribing to this product code would have access to this AMI and the AFI(s).

''Q. What is available with F1 instances?''

For developers, AWS is providing a Hardware Development Kit (HDK) to help accelerate development cycles, an FPGA Developer AMI for development in the cloud, an SDK for AMIs running on the F1 instance, and a set of APIs to register, manage, copy, query, and delete AFIs. Both developers and customers have access to the AWS Marketplace, where AFIs can be listed and purchased for use in application accelerations.

''Q. Do I need to be a FPGA expert to use an F1 instance?''

AWS customers subscribing to an F1-optimized AMI from the AWS Marketplace do not need to know anything about FPGAs to take advantage of the accelerations provided by the F1 instance and the AWS Marketplace. Simply subscribe to an F1-optimized AMI from the AWS Marketplace with an acceleration that matches the workload. The AMI contains all the software necessary for using the FPGA acceleration. Customers need only write software to the specific API for that accelerator and start using the accelerator.

''Q. I’m an FPGA developer, how do I get started with F1 instances?''

Developers can get started on the F1 instance by creating an AWS account and downloading the AWS Hardware Development Kit (HDK). The HDK includes documentation on F1, internal FPGA interfaces, and compiler scripts for generating AFIs. Developers can start writing their FPGA code to the documented interfaces included in the HDK to create their acceleration function. Developers can launch AWS instances with the FPGA Developer AMI. This AMI includes the development tools needed to compile and simulate the FPGA code. The Developer AMI is best run on the latest C5, M5, or R4 instances. Developers should have experience in the programming languages used for creating FPGA code (e.g. Verilog or VHDL) and an understanding of the operation they wish to accelerate.

''Q. I’m not an FPGA developer, how do I get started with F1 instances?''

Customers can get started with F1 instances by selecting an accelerator from the AWS Marketplace, provided by AWS Marketplace sellers, and launching an F1 instance with that AMI. The AMI includes all of the software and APIs for that accelerator. AWS manages programming the FPGA with the AFI for that accelerator. Customers do not need any FPGA experience or knowledge to use these accelerators. They can work completely at the software API level for that accelerator.

''Q. Does AWS provide a developer kit?''

Yes. The Hardware Development Kit (HDK) includes simulation tools and simulation models for developers to simulate, debug, build, and register their acceleration code. The HDK includes code samples, compile scripts, debug interfaces, and many other tools you will need to develop the FPGA code for your F1 instances. You can use the HDK either in an AWS-provided AMI, or in your on-premises development environment. These models and scripts are available publicly with an AWS account.

''Q. Can I use the HDK in my on-premises development environment?''

Yes. You can use the Hardware Development Kit (HDK) either in an AWS-provided AMI, or in your on-premises development environment.

''Q. Can I add an FPGA to any EC2 instance type?''

No. F1 instances come in two instance sizes: f1.2xlarge and f1.16xlarge.
!Compute Optimized instances
''Q. When should I use Compute Optimized instances?''

Compute Optimized instances are designed for applications that benefit from high compute power. These applications include compute-intensive workloads like high-performance web servers, high-performance computing (HPC), scientific modeling, distributed analytics, and machine learning inference.

''Q. Can I launch C4 instances as Amazon EBS-optimized instances?''

Each C4 instance type is EBS-optimized by default. C4 instances offer 500 Mbps to 4,000 Mbps of dedicated throughput to EBS, above and beyond the general-purpose network throughput provided to the instance. Since this feature is always enabled on C4 instances, launching a C4 instance explicitly as EBS-optimized will not affect the instance's behavior.

''Q. How can I use the processor state control feature available on the c4.8xlarge instance?''

The c4.8xlarge instance type provides the ability for an operating system to control processor C-states and P-states. This feature is currently available only on Linux instances. You may want to change C-state or P-state settings to increase processor performance consistency, reduce latency, or tune your instance for a specific workload. By default, Amazon Linux provides the highest-performance configuration that is optimal for most customer workloads; however, if your application would benefit from lower latency at the cost of higher single- or dual-core frequencies, or from lower-frequency sustained performance as opposed to bursty Turbo Boost frequencies, then you should consider experimenting with the C-state or P-state configuration options that are available to these instances. For additional information on this feature, see the Amazon EC2 User Guide section on Processor State Control.

''Q. Which instances are available within Compute Optimized instances category?''

C5 instances: C5 instances are the latest generation of EC2 Compute Optimized instances. C5 instances are based on Intel Xeon Platinum processors, part of the Intel Xeon Scalable (codenamed Skylake-SP) processor family, and are available in 6 sizes and offer up to 72 vCPUs and 144 GiB memory. C5 instances deliver 25% improvement in price/performance compared to C4 instances.

C4 instances: C4 instances are based on Intel Xeon E5-2666 v3 (codenamed Haswell) processors. C4 instances are available in 5 sizes and offer up to 36 vCPUs and 60 GiB memory.

''Q. Should I move my workloads from C3 or C4 instances to C5 instances?''

The generational improvement in CPU performance and lower price of C5 instances, which combined result in a 25% price/performance improvement relative to C4 instances, benefit a broad spectrum of workloads that currently run on C3 or C4 instances. For floating point intensive applications, Intel AVX-512 enables significant improvements in delivered TFLOPS by effectively extracting data level parallelism. Customers looking for absolute performance for graphics rendering and HPC workloads that can be accelerated with GPUs or FPGAs should also evaluate other instance families in the Amazon EC2 portfolio that include those resources to find the ideal instance for their workload.

''Q. Which operating systems/AMIs are supported on C5 Instances?''

EBS backed HVM AMIs with support for ENA networking and booting from NVMe-based storage can be used with C5 instances. The following AMIs are supported on C5:

Amazon Linux 2014.03 or newer
Ubuntu 14.04 or newer
SUSE Linux Enterprise Server 12 or newer
Red Hat Enterprise Linux 7.4 or newer
CentOS 7 or newer
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
Windows Server 2016
FreeBSD 11.1-RELEASE
For optimal local NVMe-based SSD storage performance on C5d, Linux kernel version 4.9+ is recommended.
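
The 4.9 kernel recommendation above can be checked programmatically. A minimal sketch, assuming kernel release strings of the usual `major.minor.patch-extra` form (`kernel_at_least` is a hypothetical helper):

```python
def kernel_at_least(release: str, minimum=(4, 9)) -> bool:
    """Parse a kernel release string like '4.14.77-81.59.amzn2.x86_64'
    and compare the leading major.minor pair against a minimum version."""
    parts = release.split("-")[0].split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) >= minimum

# On a live instance you would pass platform.release() here.
print(kernel_at_least("4.14.77-81.59.amzn2.x86_64"))  # True
print(kernel_at_least("3.10.0-957.el7.x86_64"))       # False
```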

''Q. What are the storage options available to C5 customers?''

C5 instances use EBS volumes for storage, are EBS-optimized by default, and offer up to 9 Gbps throughput to both encrypted and unencrypted EBS volumes. C5 instances access EBS volumes via PCI attached NVM Express (NVMe) interfaces. NVMe is an efficient and scalable storage interface commonly used for flash based SSDs such as local NVMe storage provided with I3 and I3en instances. Though the NVMe interface may provide lower latency compared to Xen paravirtualized block devices, when used to access EBS volumes the volume type, size, and provisioned IOPS (if applicable) will determine the overall latency and throughput characteristics of the volume. When NVMe is used to provide EBS volumes, they are attached and detached by PCI hotplug.

''Q. What network interface is supported on C5 instances?''

C5 instances use the Elastic Network Adapter (ENA) for networking and enable Enhanced Networking by default. With ENA, C5 instances can utilize up to 25 Gbps of network bandwidth.

''Q. Which storage interface is supported on C5 instances?''

C5 instances support only the NVMe EBS device model. EBS volumes attached to C5 instances will appear as NVMe devices. NVMe is a modern storage interface that provides latency reduction and results in increased disk I/O and throughput.

''Q. How many EBS volumes can be attached to C5 instances?''

C5 instances support a maximum of 27 EBS volumes for all operating systems. The limit is shared with ENI attachments, which can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html. For example: since every instance has at least 1 ENI, if you have 3 additional ENI attachments on the c5.2xlarge, you can attach 24 EBS volumes to that instance.
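
The attachment arithmetic in this answer can be sketched as a one-line calculation (a sketch assuming a single shared pool of 28 slots, i.e. the default ENI plus 27 volumes; `max_ebs_volumes` is a hypothetical helper):

```python
# C5 instances share one attachment pool between ENIs and EBS volumes:
# 1 default ENI + up to 27 EBS volumes = 28 slots in total.
TOTAL_ATTACHMENT_SLOTS = 28

def max_ebs_volumes(total_enis: int) -> int:
    """Maximum EBS volumes attachable, given the total ENI count
    (every instance has at least one ENI)."""
    if total_enis < 1:
        raise ValueError("every instance has at least one ENI")
    return TOTAL_ATTACHMENT_SLOTS - total_enis

print(max_ebs_volumes(1))  # 27 (default ENI only)
print(max_ebs_volumes(4))  # 24 (the example above: 1 default + 3 extra ENIs)
```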

''Q. What is the underlying hypervisor on C5 instances?''

C5 instances use a new EC2 hypervisor that is based on core KVM technology.

''Q: Why does the total memory reported by the operating system not match the advertised memory of the C5 instance type?''

In C5, portions of the total memory for an instance are reserved from use by the operating system, including areas used by the virtual BIOS for things like ACPI tables and for devices like the virtual video RAM.

!General Purpose instances
''Q: What are Amazon EC2 A1 instances?''

Amazon EC2 A1 instances are new general purpose instances powered by AWS Graviton processors that are custom designed by AWS.
''Q: What are the specifications of the new AWS Graviton Processors?''

AWS Graviton processors are a new line of processors that are custom designed by AWS utilizing Amazon’s extensive expertise in building platform solutions for cloud applications running at scale. These processors are based on the 64-bit Arm instruction set and feature Arm Neoverse cores as well as custom silicon designed by AWS. The cores operate at a frequency of 2.3 GHz.

''Q: When should I use A1 instances?''

A1 instances deliver significant cost savings for customer workloads that are supported by the extensive Arm ecosystem and can fit within the available memory footprint. A1 instances are ideal for scale-out applications such as web servers, containerized microservices, caching fleets, and distributed data stores. These instances will also appeal to developers, enthusiasts, and educators across the Arm developer community. Most applications that make use of open source software like Apache HTTP Server, Perl, PHP, Ruby, Python, NodeJS, and Java easily run on multiple processor architectures due to the support of Linux based operating systems. We encourage customers running such applications to give A1 instances a try.

Applications that require higher compute and network performance, require higher memory, or have dependencies on x86 architecture will be better suited for existing instances like the M5, C5, or R5 instances. Applications with variable CPU usage that experience occasional spikes in demand will get the most cost savings from the burstable performance T3 instances.

''Q: Will customers have to modify applications and workloads to be able to run on the A1 instances?''

The changes required are dependent on the application. Applications based on interpreted or run-time compiled languages (e.g. Python, Java, PHP, Node.js) should run without modifications. Other applications may need to be recompiled and those that don't rely on x86 instructions will generally build with minimal to no changes.
''Q: Which operating systems/AMIs are supported on A1 Instances?''

The following AMIs are supported on A1 instances: Amazon Linux 2, Ubuntu 16.04.4 or newer, Red Hat Enterprise Linux (RHEL) 7.6 or newer, and SUSE Linux Enterprise Server 15 or newer. Additional AMI support for Fedora, Debian, and NGINX Plus is also available through community AMIs and the AWS Marketplace. EBS-backed HVM AMIs launched on A1 instances require NVMe and ENA drivers installed at instance launch.

''Q: Are there specific AMI requirements to run on A1 instances?''

You will want to ensure that you use the “arm64” AMIs with the A1 instances. x86 AMIs are not compatible with A1 instances.
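
One way to avoid picking an incompatible image is to restrict an EC2 `DescribeImages` query to arm64 AMIs. A hedged sketch that only builds the request filters (the name pattern shown is an assumption; the `architecture` filter itself is part of the standard EC2 API):

```python
def arm64_image_filters(name_pattern: str = "*") -> list:
    """Build EC2 DescribeImages filters restricting results to
    available arm64 AMIs (required for A1 instances)."""
    return [
        {"Name": "architecture", "Values": ["arm64"]},
        {"Name": "name", "Values": [name_pattern]},
        {"Name": "state", "Values": ["available"]},
    ]

# Usage with boto3 (not executed here):
#   ec2 = boto3.client("ec2")
#   images = ec2.describe_images(Owners=["amazon"],
#                                Filters=arm64_image_filters("amzn2-ami-hvm-*"))
print(arm64_image_filters("amzn2-ami-hvm-*")[0])
```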

''Q: What are the various storage options available to A1 customers?''

A1 instances are EBS-optimized by default and offer up to 3,500 Mbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. A1 instances support only the Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes, and will not support the blkfront interface.

''Q: Which network interface is supported on A1 instances?''

A1 instances support ENA based Enhanced Networking. With ENA, A1 instances can deliver up to 10 Gbps of network bandwidth between instances when launched within a Placement Group.

''Q: Do A1 instances support the AWS Nitro System?''

Yes, A1 instances are powered by the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor.

''Q: Why does the total memory reported by Linux not match the advertised memory of the A1 instance type?''
In A1 instances, portions of the total memory for an instance are reserved from use by the operating system including areas used by the virtual UEFI for things like ACPI tables.

''Q: What are the key use cases for Amazon EC2 M5 Instances?''

M5 instances offer a good choice for running development and test environments; web, mobile, and gaming applications; analytics applications; and business-critical applications including ERP, HR, CRM, and collaboration apps. Customers who are interested in running their data-intensive workloads (e.g. HPC or SOLR clusters) on instances with a higher memory footprint will also find M5 to be a good fit. Workloads that make heavy use of single- and double-precision floating-point operations and vector processing, such as video processing, and that need higher memory can benefit substantially from the AVX-512 instructions that M5 supports.

''Q: Why should customers choose EC2 M5 Instances over EC2 M4 Instances?''

Compared with EC2 M4 instances, the new EC2 M5 instances deliver greater compute and storage performance, larger instance sizes for less cost, consistency, and security. The biggest benefit of EC2 M5 instances comes from their use of the latest generation of Intel Xeon Scalable processors (aka Skylake), which deliver up to 20% improvement in price/performance compared to M4. With AVX-512 support in M5 vs. the older AVX2 in M4, customers will gain up to 2x higher performance in workloads requiring floating-point operations. M5 instances offer up to 25 Gbps of network bandwidth and up to 10 Gbps of dedicated bandwidth to Amazon EBS. M5 instances also feature significantly higher networking and Amazon EBS performance on smaller instance sizes with EBS burst capability.

''Q: How does support for Intel AVX-512 benefit EC2 M5 and M5d Instance customers?''

Intel Advanced Vector Extensions 512 (AVX-512) is a set of new CPU instructions available on the latest Intel Xeon Scalable processor family that can accelerate performance for workloads and usages such as scientific simulations, financial analytics, artificial intelligence, machine learning/deep learning, 3D modeling and analysis, image and video processing, cryptography, and data compression, among others. Intel AVX-512 offers exceptional processing of encryption algorithms, helping to reduce the performance overhead for cryptography, which means EC2 M5 and M5d customers can deploy more secure data and services into distributed environments without compromising performance.

''Q: What are the various processor options available to M5 customers?''

The M5 and M5d instance types use a 3.1 GHz Intel Xeon Platinum 8000 series processor. The M5a and M5ad instance types use a 2.5 GHz AMD EPYC 7000 series processor.

''Q: What are the various storage options available to M5 customers?''

The M5 and M5a instance types leverage EBS volumes for storage. The M5d and M5ad instance types support up to 3.6 TB (4 x 900 GB) of local NVMe storage.

''Q: When should I use the different M5 instance types?''

Customers should consider using the M5a and M5ad instance types if they are looking to save money on price when their workloads do not fully utilize the compute resources of their chosen instance, resulting in them paying for performance that they don’t actually need. For workloads that require the highest processor performance or high floating-point performance capabilities, including vectorized computing with AVX-512 instructions, then we suggest you use the M5 or M5d instance types.

''Q: Which network interface is supported on M5 instances?''

M5, M5a, M5d, and M5ad instances support only ENA-based Enhanced Networking and will not support netback. With ENA, M5 and M5d instances can deliver up to 25 Gbps of network bandwidth between instances, and the M5a and M5ad instance types can support up to 20 Gbps of network bandwidth between instances.

''Q. Which operating systems/AMIs are supported on M5 Instances?''

EBS backed HVM AMIs with support for ENA networking and booting from NVMe-based storage can be used with M5 instances. The following AMIs are supported on M5, M5a, M5ad, and M5d:

Amazon Linux 2014.03 or newer
Ubuntu 14.04 or newer
SUSE Linux Enterprise Server 12 or newer
Red Hat Enterprise Linux 7.4 or newer
CentOS 7 or newer
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
Windows Server 2016
FreeBSD 11.1-RELEASE
For optimal local NVMe-based SSD storage performance on M5d, Linux kernel version 4.9+ is recommended.

''Q. What interface connects EBS storage to my M5 instances?''
 
M5, M5a, M5ad, and M5d instances use EBS volumes for storage, are EBS-optimized by default, and offer up to 10 Gbps throughput to both encrypted and unencrypted EBS volumes. M5 instances access EBS volumes via PCI attached NVM Express (NVMe) interfaces. NVMe is an efficient and scalable storage interface commonly used for flash based SSDs such as local NVMe storage provided with I3 and I3en instances. Though the NVMe interface may provide lower latency compared to Xen paravirtualized block devices, when used to access EBS volumes the volume type, size, and provisioned IOPS (if applicable) will determine the overall latency and throughput characteristics of the volume. When NVMe is used to provide EBS volumes, they are attached and detached by PCI hotplug.
 
''Q. How many EBS volumes can be attached to M5 instances?''
 
M5 and M5a instances support a maximum of 27 EBS volumes for all operating systems. The limit is shared with ENI attachments, which can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html. For example: since every instance has at least 1 ENI, if you have 3 additional ENI attachments on the m5.2xlarge, you can attach 24 EBS volumes to that instance.
 
''Q. What is the underlying hypervisor on M5 instances?''
 
M5, M5a, M5ad, and M5d instances use a new lightweight Nitro Hypervisor that is based on core KVM technology.
 
''Q: Why does the total memory reported by the operating system not match the advertised memory of the M5 instance type?''
 
In M5, M5a, M5ad, and M5d, portions of the total memory for an instance are reserved from use by the operating system including areas used by the virtual BIOS for things like ACPI tables and for devices like the virtual video RAM.
 
''Q: How are Burstable Performance Instances different?''
 
Amazon EC2 allows you to choose between Fixed Performance Instances (e.g. C, M and R instance families) and Burstable Performance Instances (e.g. T2). Burstable Performance Instances provide a baseline level of CPU performance with the ability to burst above the baseline.
 
T2 instances’ baseline performance and ability to burst are governed by CPU Credits. Each T2 instance receives CPU Credits continuously, the rate of which depends on the instance size. T2 instances accrue CPU Credits when they are idle, and consume CPU credits when they are active. A CPU Credit provides the performance of a full CPU core for one minute.
|!Model|!vCPUs|!CPU Credits / hour|!Maximum CPU Credit Balance|!Baseline CPU Performance|
|t2.nano|1|3|72|5% of a core|
|t2.micro|1|6|144|10% of a core|
|t2.small|1|12|288|20% of a core|
|t2.medium|2|24|576|40% of a core*|
|t2.large|2|36|864|60% of a core**|
|t2.xlarge|4|54|1,296|90% of a core***|
|t2.2xlarge|8|81|1,944|135% of a core****|

* For the t2.medium, single-threaded applications can use 40% of 1 core, or if needed, multithreaded applications can use 20% each of 2 cores.

** For the t2.large, single-threaded applications can use 60% of 1 core, or if needed, multithreaded applications can use 30% each of 2 cores.

*** For the t2.xlarge, single-threaded applications can use 90% of 1 core, or if needed, multithreaded applications can use 45% each of 2 cores or 22.5% of all 4 cores.

**** For the t2.2xlarge, single-threaded applications can use all of 1 core, or if needed, multithreaded applications can use 67.5% each of 2 cores or 16.875% of all 8 cores.
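
The columns in the table above are internally consistent: one CPU Credit buys one full core-minute, so baseline performance is credits-per-hour divided by 60, and the maximum balance corresponds to 24 hours of accrual. A small sketch verifying that relationship:

```python
def baseline_fraction(credits_per_hour: int) -> float:
    """Baseline CPU as a fraction of one core: one credit buys one
    full core-minute, and an hour has 60 minutes."""
    return credits_per_hour / 60.0

def max_balance(credits_per_hour: int) -> int:
    """T2 instances can bank up to 24 hours of earned credits."""
    return credits_per_hour * 24

# t2.micro earns 6 credits/hour:
print(baseline_fraction(6))   # 0.1 -> 10% of a core
print(max_balance(6))         # 144
# t2.2xlarge earns 81 credits/hour:
print(baseline_fraction(81))  # 1.35 -> 135% of a core
print(max_balance(81))        # 1944
```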

''Q. How do I choose the right Amazon Machine Image (AMI) for my T2 instances?''

You will want to verify that the minimum memory requirements of your operating system and applications are within the memory allocated for each T2 instance size (e.g. 512 MiB for t2.nano). Operating systems with Graphical User Interfaces (GUI) that consume significant memory and CPU, for example Microsoft Windows, might need a t2.micro or larger instance size for many use cases. You can find AMIs suitable for the t2.nano instance types on AWS Marketplace. Windows customers who do not need the GUI can use the Microsoft Windows Server 2012 R2 Core AMI.

''Q: When should I choose a Burstable Performance Instance, such as T2?''

T2 instances provide a cost-effective platform for a broad range of general purpose production workloads. T2 Unlimited instances can sustain high CPU performance for as long as required. If your workloads consistently require CPU usage much higher than the baseline, consider a dedicated CPU instance family such as the M or C.

''Q: How can I see the CPU Credit balance for each T2 instance?''

You can see the CPU Credit balance for each T2 instance in the EC2 per-instance metrics in Amazon CloudWatch. T2 instances have four metrics: CPUCreditUsage, CPUCreditBalance, CPUSurplusCreditBalance, and CPUSurplusCreditsCharged. CPUCreditUsage indicates the amount of CPU Credits used. CPUCreditBalance indicates the balance of CPU Credits. CPUSurplusCreditBalance indicates credits used for bursting in the absence of earned credits. CPUSurplusCreditsCharged indicates credits that are charged when average usage exceeds the baseline.
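
These metrics live in the standard `AWS/EC2` CloudWatch namespace. A hedged boto3 sketch that builds (but does not send) a `get_metric_statistics` request for `CPUCreditBalance`; the instance ID is a placeholder:

```python
from datetime import datetime, timedelta

def credit_balance_request(instance_id: str, hours: int = 3) -> dict:
    """Build kwargs for CloudWatch get_metric_statistics to read a
    T2 instance's CPUCreditBalance over the last `hours` hours."""
    now = datetime.utcnow()
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUCreditBalance",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,  # 5-minute datapoints
        "Statistics": ["Average"],
    }

# Usage (not executed here; requires AWS credentials):
#   cloudwatch = boto3.client("cloudwatch")
#   resp = cloudwatch.get_metric_statistics(
#       **credit_balance_request("i-0123456789abcdef0"))
print(credit_balance_request("i-0123456789abcdef0")["MetricName"])
```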

''Q: What happens to CPU performance if my T2 instance is running low on credits (CPU Credit balance is near zero)?''

If your T2 instance has a zero CPU Credit balance, performance will remain at baseline CPU performance. For example, the t2.micro provides baseline CPU performance of 10% of a physical CPU core. If your instance’s CPU Credit balance is approaching zero, CPU performance will be lowered to baseline performance over a 15-minute interval.

''Q: Does my T2 instance credit balance persist at stop / start?''

No, a stopped instance does not retain its previously earned credit balance.

''Q: Can T2 instances be purchased as Reserved Instances or Spot Instances?''

T2 instances can be purchased as On-Demand Instances, Reserved Instances or Spot Instances.

!High Memory instances
''Q. What are EC2 High Memory instances?''

Amazon EC2 High Memory instances offer 6 TB, 9 TB, or 12 TB of memory in a single instance. These instances are designed to run large in-memory databases, including production installations of SAP HANA, in the cloud. EC2 High Memory instances are the first Amazon EC2 instances powered by an 8-socket platform with latest-generation Intel® Xeon® Platinum 8176M (Skylake) processors that are optimized for mission-critical enterprise workloads. EC2 High Memory instances deliver high networking throughput and low latency with 25 Gbps of aggregate network bandwidth using Amazon Elastic Network Adapter (ENA)-based Enhanced Networking. EC2 High Memory instances are EBS-optimized by default, and support encrypted and unencrypted EBS volumes.
''Q. Are High Memory instances certified by SAP to run SAP HANA workloads?''

High Memory instances are certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.

''Q. Which instances are available within High Memory instance category?''

Three High Memory instances are available: u-6tb1.metal offers 6 TB of memory; u-9tb1.metal offers 9 TB of memory; and u-12tb1.metal offers 12 TB of memory. Each High Memory instance offers 448 logical processors, where each logical processor is a hyperthread on the 8-socket platform with a total of 224 CPU cores.

''Q. What are the storage options available with High Memory instances?''

High Memory instances support Amazon EBS volumes for storage. High Memory instances are EBS-optimized by default, and offer up to 14 Gbps of storage bandwidth to both encrypted and unencrypted EBS volumes.

''Q. Which storage interface is supported on High Memory instances?''

High Memory instances access EBS volumes via PCI attached NVM Express (NVMe) interfaces. EBS volumes attached to High Memory instances appear as NVMe devices. NVMe is an efficient and scalable storage interface, which is commonly used for flash based SSDs and provides latency reduction and results in increased disk I/O and throughput. The EBS volumes are attached and detached by PCI hotplug.

''Q. What network performance is supported on High Memory instances?''

High Memory instances use the Elastic Network Adapter (ENA) for networking and enable Enhanced Networking by default. With ENA, High Memory instances can utilize up to 25 Gbps of network bandwidth.

''Q. Can I run High Memory instances in my existing Amazon Virtual Private Cloud (VPC)?''

You can run High Memory instances in your existing and new Amazon VPCs.

''Q. What is the underlying hypervisor on High Memory instances?''

High Memory instances are EC2 bare metal instances, and do not run on a hypervisor. These instances allow the operating systems to run directly on the underlying hardware, while still providing access to the benefits of the cloud.

''Q. Do High Memory instances enable CPU power management state control?''

Yes. You can configure C-states and P-states on High Memory instances. You can use C-states to enable higher turbo frequencies (as much as 3.8 GHz). You can also use P-states to lower performance variability by pinning all cores at P1 or higher P states, which is similar to disabling Turbo, and running consistently at the base CPU clock speed.

''Q. What purchase options are available for High Memory instances?''

High Memory instances are available on EC2 Dedicated Hosts on a 3-year Reservation. After the 3-year reservation expires, you can continue using the host at an hourly rate or release it anytime.

''Q. What is the lifecycle of a Dedicated Host?''

Once a Dedicated Host is allocated within your account, it will be standing by for your use. You can then launch an instance with a tenancy of "host" using the RunInstances API, and can also stop/start/terminate the instance through the API. You can use the AWS Management Console to manage the Dedicated Host and the instance. The Dedicated Host will be allocated to your account for the period of 3-year reservation. After the 3-year reservation expires, you can continue using the host or release it anytime.
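The launch step described above can be sketched as a RunInstances request body with "host" tenancy. This is a minimal sketch, assuming the boto3 Python SDK; the AMI ID and host ID are hypothetical placeholders, and the actual API call is shown commented out:

```python
# Sketch: launching a High Memory instance onto an allocated Dedicated Host
# using "host" tenancy, as a RunInstances request body
# (e.g. boto3: ec2.run_instances(**params)). AMI and host IDs are hypothetical.
params = {
    "ImageId": "ami-0123456789abcdef0",    # placeholder: ENA-enabled HVM AMI
    "InstanceType": "u-12tb1.metal",
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {
        "Tenancy": "host",                 # "host" tenancy targets Dedicated Hosts
        "HostId": "h-0123456789abcdef0",   # placeholder: your allocated host
    },
}

# import boto3
# response = boto3.client("ec2").run_instances(**params)
```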

''Q. Can I launch, stop/start, and terminate High Memory instances using AWS CLI/SDK?''

You can launch, stop/start, and terminate instances on your EC2 Dedicated Hosts using AWS CLI/SDK.

''Q. Which AMIs are supported with High Memory instances?''

EBS-backed HVM AMIs with support for ENA networking can be used with High Memory instances. The latest Amazon Linux, Red Hat Enterprise Linux, SUSE Enterprise Linux Server, and Windows Server AMIs are supported. Operating system support for SAP HANA workloads on High Memory instances include: SUSE Linux Enterprise Server 12 SP3 for SAP, Red Hat Enterprise Linux 7.4 for SAP, and Red Hat Enterprise Linux 7.5 for SAP.

''Q. Are there standard SAP HANA reference deployment frameworks available for the High Memory instance and the AWS Cloud?''

You can use the AWS Quick Start reference HANA deployments to rapidly deploy all the necessary HANA building blocks on High Memory instances following SAP’s recommendations for high performance and reliability. AWS Quick Starts are modular and customizable, so you can layer additional functionality on top or modify them for your own implementations.

!Previous Generation instances
''Q: Why don’t I see M1, C1, CC2 and HS1 instances on the pricing pages any more?''

These have been moved to the Previous Generation Instance page.

''Q: Are these Previous Generation instances still being supported?''

Yes. Previous Generation instances are still fully supported.

''Q: Can I still use/add more Previous Generation instances?''

Yes. Previous Generation instances are still available as On-Demand, Reserved Instances, and Spot Instances through our APIs, CLI, and the EC2 Management Console.

''Q: Are my Previous Generation instances going to be deleted?''

No. Your C1, C3, CC2, CR1, G2, HS1, M1, M2, M3, R3 and T1 instances are still fully functional and will not be deleted because of this change.

''Q: Are Previous Generation instances being discontinued soon?''

Currently, there are no plans to end-of-life Previous Generation instances. However, as with any rapidly evolving technology, the latest generation will typically provide the best performance for the price, and we encourage our customers to take advantage of technological advancements.

''Q: Will my Previous Generation instances I purchased as a Reserved Instance be affected or changed?''

No. Your Reserved Instances will not change, and the Previous Generation instances are not going away.

!Memory Optimized instances
''Q. When should I use Memory-optimized instances?''

Memory-optimized instances offer large memory sizes for memory-intensive workloads, including in-memory databases, in-memory analytics solutions, High Performance Computing (HPC), and scientific computing.

''Q. When should I use X1 instances?''

X1 instances are ideal for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high performance computing (HPC) applications. X1 instances are certified by SAP to run production environments of the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.

''Q. When should I use X1e instances?''

X1e instances are ideal for running in-memory databases like SAP HANA, high-performance databases and other memory optimized enterprise applications. X1e instances offer twice the memory per vCPU compared to the X1 instances. The x1e.32xlarge instance is certified by SAP to run production environments of the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS Cloud.

''Q. How do X1 and X1e instances differ?''

X1e instances offer 32 GB of memory per vCPU, whereas X1 instances offer 16 GB of memory per vCPU. X1e instances enable six instance configurations, from 4 vCPUs with 122 GiB of memory up to 128 vCPUs with 3,904 GiB of memory. X1 instances enable two instance configurations: 64 vCPUs with 976 GiB of memory and 128 vCPUs with 1,952 GiB of memory.

''Q. What are the key specifications of Intel E7 (codenamed Haswell) processors that power X1 and X1e instances?''

The E7 processors have a high core count to support workloads that scale efficiently across a large number of cores. The Intel E7 processors also feature high memory bandwidth and larger L3 caches to boost the performance of in-memory applications. In addition, the Intel E7 processor:

* Enables increased cryptographic performance via the latest Intel AES-NI feature.
* Supports Transactional Synchronization Extensions (TSX) to boost the performance of in-memory transactional data processing.
* Supports Advanced Vector Extensions 2 (Intel AVX2) processor instructions to expand most integer commands to 256 bits.

''Q. Do X1 and X1e instances enable CPU power management state control?''

Yes. You can configure C-states and P-states on x1e.32xlarge, x1e.16xlarge, x1e.8xlarge, x1.32xlarge and x1.16xlarge instances. You can use C-states to enable higher turbo frequencies (as much as 3.1 GHz with one or two core turbo). You can also use P-states to lower performance variability by pinning all cores at P1 or higher P states, which is similar to disabling Turbo, and running consistently at the base CPU clock speed.

''Q: What operating systems are supported on X1 and X1e instances?''

X1 and X1e instances provide a high number of vCPUs, which might cause launch issues in some Linux operating systems that have a lower vCPU limit. We strongly recommend that you use the latest AMIs when you launch these instances.

AMI support for SAP HANA workloads includes: SUSE Linux 12, SUSE Linux 12 SP1, SLES for SAP 12 SP1, SLES for SAP 12 SP2, and RHEL 7.2 for SAP HANA.

x1e.32xlarge will also support Windows Server 2012 R2 and 2012 RTM. x1e.xlarge, x1e.2xlarge, x1e.4xlarge, x1e.8xlarge, x1e.16xlarge and x1.32xlarge will also support Windows Server 2012 R2, 2012 RTM and 2008 R2 64-bit (Windows Server 2008 SP2 and older versions will not be supported), and x1.16xlarge will support Windows Server 2012 R2, 2012 RTM, 2008 R2 64-bit, 2008 SP2 64-bit, and 2003 R2 64-bit (Windows Server 32-bit versions will not be supported).

''Q. What storage options are available for X1 customers?''

X1 instances offer SSD based instance store, which is ideal for temporary storage of information such as logs, buffers, caches, temporary tables, temporary computational data, and other temporary content. X1 instance store provides the best I/O performance when you use a Linux kernel that supports persistent grants, an extension to the Xen block ring protocol.

X1 instances are EBS-optimized by default and offer up to 14 Gbps of dedicated bandwidth to EBS volumes. EBS offers multiple volume types to support a wide variety of workloads. For more information see the EC2 User Guide.

''Q. How do I build a cost-effective failover solution on X1 and X1e instances?''

You can design simple and cost-effective failover solutions on X1 instances using Amazon EC2 Auto Recovery, an Amazon EC2 feature designed to better manage failover upon instance impairment. You can enable Auto Recovery for X1 instances by creating an Amazon CloudWatch alarm: choose the “EC2 Status Check Failed (System)” metric and select the “Recover this instance” action. Instance recovery is subject to underlying limitations, including those reflected in the Instance Recovery Troubleshooting documentation. For more information, see the Auto Recovery documentation and Creating Amazon CloudWatch Alarms.
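The alarm described above can be sketched as the request body for CloudWatch's PutMetricAlarm API. A minimal sketch, assuming boto3: the instance ID and the region in the recover-action ARN are hypothetical placeholders, and the period/evaluation settings are illustrative choices, not AWS-mandated values.

```python
# Sketch: an EC2 Auto Recovery alarm as a PutMetricAlarm request body
# (e.g. boto3: cloudwatch.put_metric_alarm(**alarm)).
alarm = {
    "AlarmName": "recover-my-x1-instance",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",  # "EC2 Status Check Failed (System)"
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Minimum",
    "Period": 60,                # illustrative: check the metric every minute
    "EvaluationPeriods": 2,      # illustrative: require two failing periods
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    # The built-in recover action ("Recover this instance"); region is a placeholder:
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}

# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```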

''Q. Are there standard SAP HANA reference deployment frameworks available for the X1 instance and the AWS Cloud?''

You can use the AWS Quick Start reference HANA deployments to rapidly deploy all the necessary HANA building blocks on X1 instances following SAP’s recommendations for high performance and reliability. AWS Quick Starts are modular and customizable, so you can layer additional functionality on top or modify them for your own implementations. For additional information on deploying HANA on AWS, please refer to SAP HANA on AWS Cloud: Quick Start Reference Deployment Guide.

!Storage Optimized instances
''Q. What is a Dense-storage Instance?''

Dense-storage instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log processing applications. The Dense-storage instances offer the best price/GB-storage and price/disk-throughput of any EC2 instance.

''Q. How do Dense-storage and HDD-storage instances compare to High I/O instances?''

High I/O instances (I2) are targeted at workloads that demand low latency and high random I/O in addition to moderate storage density, and provide the best price/IOPS of any EC2 instance type. Dense-storage instances (D2) and HDD-storage instances (H1) are optimized for applications that require high sequential read/write access and low-cost storage for very large data sets, and provide the best price/GB-storage and price/disk-throughput of any EC2 instance.

''Q. How much disk throughput can Dense-storage and HDD-storage instances deliver?''

The largest current-generation Dense-storage instance, d2.8xlarge, can deliver up to 3.5 GBps read and 3.1 GBps write disk throughput with a 2 MiB block size. The largest H1 instance size, h1.16xlarge, can deliver up to 1.15 GBps read and write. To ensure the best disk throughput performance from your D2 instances on Linux, we recommend that you use the most recent version of the Amazon Linux AMI, or another Linux AMI with a kernel version of 3.8 or later that supports persistent grants, an extension to the Xen block ring protocol that significantly improves disk throughput and scalability.

''Q. Do Dense-storage and HDD-storage instances provide any failover mechanisms or redundancy?''

The primary data storage for Dense-storage instances is HDD-based instance storage. Like all instance storage, these storage volumes persist only for the life of the instance. Hence, we recommend that you build a degree of redundancy (e.g. RAID 1/5/6) or use file systems (e.g. HDFS and MapR-FS) that support redundancy and fault tolerance. You can also back up data periodically to more durable data storage solutions such as Amazon Simple Storage Service (S3) for additional data durability. Please refer to Amazon S3 for reference.

''Q. How do Dense-storage and HDD-storage instances differ from Amazon EBS?''

Amazon EBS offers simple, elastic, reliable (replicated), and persistent block level storage for Amazon EC2 while abstracting the details of the underlying storage media in use. Amazon EC2 instance storage provides directly attached non-persistent, high performance storage building blocks that can be used for a variety of storage applications. Dense-storage instances are specifically targeted at customers who want high sequential read/write access to large data sets on local storage, e.g. for Hadoop distributed computing and massively parallel processing data warehousing.

''Q. Can I launch H1 instances as Amazon EBS-optimized instances?''

Each H1 instance type is EBS-optimized by default. H1 instances offer 1,750 Mbps to 14,000 Mbps to EBS above and beyond the general-purpose network throughput provided to the instance. Since this feature is always enabled on H1 instances, launching an H1 instance explicitly as EBS-optimized will not affect the instance's behavior.

''Q. Can I launch D2 instances as Amazon EBS-optimized instances?''

Each D2 instance type is EBS-optimized by default. D2 instances offer 500 Mbps to 4,000 Mbps to EBS above and beyond the general-purpose network throughput provided to the instance. Since this feature is always enabled on D2 instances, launching a D2 instance explicitly as EBS-optimized will not affect the instance's behavior.

''Q. Are HDD-storage instances offered in EC2 Classic?''

The current generation of HDD-storage instances (H1 instances) can only be launched in Amazon VPC. With Amazon VPC, you can leverage a number of features that are available only on the Amazon VPC platform – such as enabling enhanced networking, assigning multiple private IP addresses to your instances, or changing your instances' security groups. For more information about the benefits of using a VPC, see Amazon EC2 and Amazon Virtual Private Cloud (Amazon VPC).

''Q. Are Dense-storage instances offered in EC2 Classic?''

The current generation of Dense-storage instances (D2 instances) can be launched in both EC2-Classic and Amazon VPC. However, by launching a Dense-storage instance into a VPC, you can leverage a number of features that are available only on the Amazon VPC platform – such as enabling enhanced networking, assigning multiple private IP addresses to your instances, or changing your instances' security groups. For more information about the benefits of using a VPC, see Amazon EC2 and Amazon Virtual Private Cloud (Amazon VPC). You can take steps to migrate your resources from EC2-Classic to Amazon VPC. For more information, see Migrating a Linux Instance from EC2-Classic to a VPC.

''Q. What is a High I/O instance?''

High I/O instances use NVMe-based local instance storage to deliver very high, low-latency I/O capacity to applications, and are optimized for applications that require millions of IOPS. Like Cluster instances, High I/O instances can be clustered via cluster placement groups for low-latency networking.

''Q. Are all features of Amazon EC2 available for High I/O instances?''

High I/O instances support all Amazon EC2 features. I3 and I3en instances offer NVMe-only storage, while previous-generation I2 instances allow legacy blkfront storage access. Currently, you can only purchase High I/O instances as On-Demand, Reserved Instances, or Spot instances.

''Q. Is there a limit on the number of High I/O instances I can use?''

Currently, you can launch 2 i3.16xlarge instances by default. If you wish to run more than 2 On-Demand instances, please complete the Amazon EC2 instance request form.

''Q. How many IOPS can i3.16xlarge instances deliver?''

Using HVM AMIs, High I/O I3 instances can deliver up to 3.3 million IOPS, measured at 100% random reads with a 4 KB block size, and up to 300,000 IOPS at 100% random writes, measured at a 4 KB block size, to applications across 8 x 1.9 TB NVMe devices.

''Q. What is the sequential throughput of i3 instances?''

The maximum sequential throughput, measured at a 128 KB block size, is 16 GB/s read throughput and 6.4 GB/s write throughput.

''Q. AWS has other database and Big Data offerings. When or why should I use High I/O instances?''

High I/O instances are ideal for applications that require access to millions of low latency IOPS, and can leverage data stores and architectures that manage data redundancy and availability. Example applications are:

* NoSQL databases like Cassandra and MongoDB
* In-memory databases like Aerospike
* Elasticsearch and analytics workloads
* OLTP systems

''Q. Do High I/O instances provide any failover mechanisms or redundancy?''

As with other Amazon EC2 instance types, instance storage on I3 and I3en instances persists only for the life of the instance. Customers are expected to build resilience into their applications. We recommend using databases and file systems that support redundancy and fault tolerance. Customers should back up data periodically to Amazon S3 for improved data durability.

''Q. Do High I/O instances support TRIM?''

The TRIM command allows the operating system to inform SSDs which blocks of data are no longer considered in use and can be wiped internally. In the absence of TRIM, future write operations to the involved blocks can slow down significantly. I3 and I3en instances support TRIM.

''Q. How many IOPS can I3en.24xlarge instances deliver?''

Using HVM AMIs, High I/O I3en instances can deliver up to 2 million IOPS, measured at 100% random reads with a 4 KB block size, and up to 1.6 million IOPS at 100% random writes, measured at a 4 KB block size, to applications across 8 x 7.5 TB NVMe devices.

''Q. What is the sequential throughput of I3en instances?''

The maximum sequential throughput, measured at a 128 KB block size, is 16 GB/s read throughput and 8 GB/s write throughput.
!Storage
Amazon Elastic Block Store (EBS) | Amazon Elastic File System (EFS) | NVMe Instance storage

!Amazon Elastic Block Store (EBS)
''Q: What happens to my data when a system terminates?''

The data stored on a local instance store will persist only as long as that instance is alive. However, data that is stored on an Amazon EBS volume will persist independently of the life of the instance. Therefore, we recommend that you use the local instance store for temporary data and, for data requiring a higher level of durability, we recommend using Amazon EBS volumes or backing up the data to Amazon S3. If you are using an Amazon EBS volume as a root partition, set the volume's DeleteOnTermination flag to "false" if you want the volume to persist beyond the life of the instance.
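Turning off that flag for a running instance can be sketched as a ModifyInstanceAttribute request body; in the EC2 API the flag is named DeleteOnTermination. A minimal sketch, assuming boto3: the instance ID is a hypothetical placeholder, and the root device name varies by AMI.

```python
# Sketch: keep an EBS root volume after instance termination by disabling
# the DeleteOnTermination flag, as a ModifyInstanceAttribute request body
# (e.g. boto3: ec2.modify_instance_attribute(**params)).
params = {
    "InstanceId": "i-0123456789abcdef0",     # hypothetical instance ID
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",        # common root device; check your AMI
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
}

# import boto3
# boto3.client("ec2").modify_instance_attribute(**params)
```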

''Q: What kind of performance can I expect from Amazon EBS volumes?''

Amazon EBS provides four current-generation volume types, divided into two major categories: SSD-backed storage for transactional workloads and HDD-backed storage for throughput-intensive workloads. These volume types differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your applications. For more information, see the EBS product details page, and for additional information on performance, see the Amazon EC2 User Guide's EBS Performance section.

''Q: What are Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types?''

ST1 volumes are backed by hard disk drives (HDDs) and are ideal for frequently accessed, throughput intensive workloads with large datasets and large I/O sizes, such as MapReduce, Kafka, log processing, data warehouse, and ETL workloads. These volumes deliver performance in terms of throughput, measured in MB/s, and include the ability to burst up to 250 MB/s per TB, with a baseline throughput of 40 MB/s per TB and a maximum throughput of 500 MB/s per volume. ST1 is designed to deliver the expected throughput performance 99% of the time and has enough I/O credits to support a full-volume scan at the burst rate.

SC1 volumes are backed by hard disk drives (HDDs) and provide the lowest cost per GB of all EBS volume types. They are ideal for less frequently accessed workloads with large, cold datasets. Similar to st1, sc1 provides a burst model: these volumes can burst up to 80 MB/s per TB, with a baseline throughput of 12 MB/s per TB and a maximum throughput of 250 MB/s per volume. For infrequently accessed data, sc1 provides extremely inexpensive storage. SC1 is designed to deliver the expected throughput performance 99% of the time and has enough I/O credits to support a full-volume scan at the burst rate.

To maximize the performance of st1 and sc1, we recommend using EBS-optimized EC2 instances.
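The st1 and sc1 figures above follow a simple linear model: baseline and burst throughput scale with volume size up to a per-volume cap. A sketch of that model, using only the numbers quoted in the answers above:

```python
# Model of the st1/sc1 throughput figures above: baseline and burst
# throughput scale linearly with volume size (TB) up to a per-volume cap.
def hdd_throughput(size_tb, base_per_tb, burst_per_tb, volume_cap):
    """Return (baseline, burst) throughput in MB/s for an HDD-backed volume."""
    baseline = min(base_per_tb * size_tb, volume_cap)
    burst = min(burst_per_tb * size_tb, volume_cap)
    return baseline, burst

def st1(size_tb):   # Throughput Optimized HDD: 40 MB/s/TB base, 250 burst, 500 cap
    return hdd_throughput(size_tb, 40, 250, 500)

def sc1(size_tb):   # Cold HDD: 12 MB/s/TB base, 80 burst, 250 cap
    return hdd_throughput(size_tb, 12, 80, 250)

print(st1(1))    # (40, 250)
print(st1(16))   # (500, 500) -- both capped at the per-volume maximum
print(sc1(2))    # (24, 160)
```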

''Q: Which volume type should I choose?''

Amazon EBS includes two major categories of storage: SSD-backed storage for transactional workloads (performance depends primarily on IOPS) and HDD-backed storage for throughput workloads (performance depends primarily on throughput, measured in MB/s). SSD-backed volumes are designed for transactional, IOPS-intensive database workloads, boot volumes, and workloads that require high IOPS. SSD-backed volumes include Provisioned IOPS SSD (io1) and General Purpose SSD (gp2). HDD-backed volumes are designed for throughput-intensive and big-data workloads, large I/O sizes, and sequential I/O patterns. HDD-backed volumes include Throughput Optimized HDD (st1) and Cold HDD (sc1). For more information on Amazon EBS see the EBS product details page.

''Q: Do you support multiple instances accessing a single volume?''

While you are able to attach multiple volumes to a single instance, attaching multiple instances to one volume is not supported at this time.

''Q: Will I be able to access my EBS snapshots using the regular Amazon S3 APIs?''

No, EBS snapshots are only available through the Amazon EC2 APIs.

''Q: Do volumes need to be un-mounted in order to take a snapshot? Does the snapshot need to complete before the volume can be used again?''

No, snapshots can be done in real time while the volume is attached and in use. However, snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or OS. In order to ensure consistent snapshots on volumes attached to an instance, we recommend cleanly detaching the volume, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the machine to take a clean snapshot.

''Q: Are snapshots versioned? Can I read an older snapshot to do a point-in-time recovery?''

Each snapshot is given a unique identifier, and customers can create volumes based on any of their existing snapshots.

''Q: What charges apply when using Amazon EBS shared snapshots?''

If you share a snapshot, you won’t be charged when other users make a copy of your snapshot. If you make a copy of another user’s shared snapshot, you will be charged normal EBS rates.

''Q: Can users of my Amazon EBS shared snapshots change any of my data?''

Users who have permission to create volumes based on your shared snapshots will first make a copy of the snapshot into their account. Users can modify their own copies of the data, but the data on your original snapshot and any other volumes created by other users from your original snapshot will remain unmodified.

''Q: How can I discover Amazon EBS snapshots that have been shared with me?''

You can find snapshots that have been shared with you by selecting “Private Snapshots” from the viewing dropdown in the Snapshots section of the AWS Management Console. This section will list both snapshots you own and snapshots that have been shared with you.

''Q: How can I find what Amazon EBS snapshots are shared globally?''

You can find snapshots that have been shared globally by selecting “Public Snapshots” from the viewing dropdown in the Snapshots section of the AWS Management Console.

''Q: Do you offer encryption on Amazon EBS volumes and snapshots?''

Yes. EBS offers seamless encryption of data volumes and snapshots. EBS encryption better enables you to meet security and encryption compliance requirements.

''Q: How can I find a list of Amazon Public Data Sets?''

All information on Public Data Sets is available in our Public Data Sets Resource Center. You can also obtain a listing of Public Data Sets within the AWS Management Console by choosing “Amazon Snapshots” from the viewing dropdown in the Snapshots section.

''Q: Where can I learn more about EBS?''

You can visit the Amazon EBS FAQ page.
!Amazon Elastic File System (EFS)
''Q. How do I access a file system from an Amazon EC2 instance?''

To access your file system, you mount the file system on an Amazon EC2 Linux-based instance using the standard Linux mount command and the file system’s DNS name. Once you’ve mounted, you can work with the files and directories in your file system just like you would with a local file system.

Amazon EFS uses the NFSv4.1 protocol. For a step-by-step example of how to access a file system from an Amazon EC2 instance, please see the Amazon EFS Getting Started guide.
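The mount step described above can be sketched by assembling the standard NFSv4.1 mount command for a file system's DNS name. The file system ID, region, and mount point below are hypothetical placeholders, and the mount options follow AWS's commonly recommended EFS settings:

```python
# Sketch: build the standard NFSv4.1 mount command for an EFS file system.
# fs_id, region, and mount_point are caller-supplied; the IDs used below
# are hypothetical placeholders.
def efs_mount_command(fs_id, region, mount_point):
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"sudo mount -t nfs4 -o {options} {dns_name}:/ {mount_point}"

print(efs_mount_command("fs-0123456789abcdef0", "us-east-1", "/mnt/efs"))
```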

''Q. What Amazon EC2 instance types and AMIs work with Amazon EFS?''

Amazon EFS is compatible with all Amazon EC2 instance types and is accessible from Linux-based AMIs. You can mix and match the instance types connected to a single file system. For a step-by-step example of how to access a file system from an Amazon EC2 instance, please see the Amazon EFS Getting Started guide.

''Q. How do I load data into a file system?''

You can load data into an Amazon EFS file system from your Amazon EC2 instances or from your on-premises datacenter servers.

Amazon EFS file systems can be mounted on an Amazon EC2 instance, so any data that is accessible to an Amazon EC2 instance can also be read and written to Amazon EFS. To load data that is not currently stored on the Amazon cloud, you can use the same methods you use to transfer files to Amazon EC2 today, such as Secure Copy (SCP).

Amazon EFS file systems can also be mounted on an on-premises server, so any data that is accessible to an on-premises server can be read and written to Amazon EFS using standard Linux tools. For more information about accessing a file system from an on-premises server, please see the On-premises Access section of the Amazon EFS FAQ.

For more information about moving data to the Amazon cloud, please see the Cloud Data Migration page.

''Q. How do I access my file system from outside my VPC?''

Amazon EC2 instances within your VPC can access your file system directly, and Amazon EC2 Classic instances outside your VPC can mount a file system via ClassicLink. On-premises servers can mount your file systems via an AWS Direct Connect connection to your VPC.

''Q. How many Amazon EC2 instances can connect to a file system?''

Amazon EFS supports one to thousands of Amazon EC2 instances connecting to a file system concurrently.

''Q: Where can I learn more about EFS?''

You can visit the Amazon EFS FAQ page.

!NVMe Instance storage
''Q: Which instance types offer NVMe instance storage?''

Today, I3en, I3, C5d, M5d, M5ad, R5d, R5ad, z1d, and F1 instances offer NVMe instance storage.

''Q: Is data stored on Amazon EC2 NVMe instance storage encrypted?''

Yes, all data is encrypted in an AWS Nitro hardware module prior to being written on the locally attached SSDs offered via NVMe instance storage.

''Q: What encryption algorithm is used to encrypt Amazon EC2 NVMe instance storage?''

Amazon EC2 NVMe instance storage is encrypted using an XTS-AES-256 block cipher.

''Q: Are encryption keys unique to an instance or a particular device for NVMe instance storage?''

Encryption keys are securely generated within the Nitro hardware module, and are unique to each NVMe instance storage device that is provided with an EC2 instance.

''Q: What is the lifetime of encryption keys on NVMe instance storage?''

All keys are irrecoverably destroyed on any de-allocation of the storage, including instance stop and instance terminate actions.

''Q: Can I disable NVMe instance storage encryption?''

No, NVMe instance storage encryption is always on, and cannot be disabled.

''Q: Do the published IOPS performance numbers on I3 and I3en include data encryption?''

Yes, the documented IOPS numbers for I3 and I3en NVMe instance storage include encryption.

''Q: Does Amazon EC2 NVMe instance storage support AWS Key Management Service (KMS)?''

No, disk encryption on NVMe instance storage does not support integration with AWS KMS. Customers cannot bring their own keys to use with NVMe instance storage.

!Networking and security
Elastic Fabric Adapter (EFA) | Elastic IP | Elastic Load Balancing | Enhanced networking | Security

!Elastic Fabric Adapter (EFA)
''Q: Why should I use EFA?''

EFA brings the scalability, flexibility, and elasticity of the cloud to tightly coupled HPC applications. With EFA, tightly coupled HPC applications have access to lower and more consistent latency and higher throughput than traditional TCP channels, enabling them to scale better. EFA support can be enabled dynamically, on demand, on any supported EC2 instance without pre-reservation, giving you the flexibility to respond to changing business/workload priorities.

''Q: What types of applications can benefit from using EFA?''

High Performance Computing (HPC) applications distribute computational workloads across a cluster of instances for parallel processing. Examples of HPC applications include computational fluid dynamics (CFD), crash simulations, and weather simulations. HPC applications are generally written using the Message Passing Interface (MPI) and impose stringent requirements for inter-instance communication in terms of both latency and bandwidth. Applications using MPI and other HPC middleware which supports the libfabric communication stack can benefit from EFA.

''Q: How does EFA communication work?''

EFA devices provide all of the functionality of ENA devices, plus a new OS-bypass hardware interface that allows user-space applications to communicate directly with the hardware-provided reliable transport functionality. Most applications will use existing middleware, such as the Message Passing Interface (MPI), to interface with EFA. AWS has worked with a number of middleware providers to ensure support for the OS-bypass functionality of EFA. Please note that communication using the OS-bypass functionality is limited to instances within a single subnet of a Virtual Private Cloud (VPC).

''Q: Which instance types support EFA?''

EFA is currently available on the c5n.18xlarge, p3dn.24xlarge, and i3en.24xlarge instance sizes. Support for more instance types and sizes is being added in the coming months.

''Q: What are the differences between an EFA ENI and an ENA ENI?''

An ENA ENI provides traditional IP networking features necessary to support VPC networking. An EFA ENI provides all the functionality of an ENA ENI, plus hardware support for applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication) using an extended programming interface. Due to the advanced capabilities of the EFA ENI, EFA ENIs can only be attached at launch or to stopped instances.

''Q: What are the pre-requisites to enabling EFA on an instance?''

EFA support can be enabled either at the launch of the instance or added to a stopped instance. EFA devices cannot be attached to a running instance.
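Enabling EFA at launch, as described above, can be sketched as a RunInstances request body with an EFA network interface. A minimal sketch, assuming boto3; the AMI, subnet, and security group IDs are hypothetical placeholders:

```python
# Sketch: request an EFA interface at instance launch, as a RunInstances
# request body (e.g. boto3: ec2.run_instances(**params)). IDs are placeholders.
params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "c5n.18xlarge",             # an EFA-capable size
    "MinCount": 1,
    "MaxCount": 1,
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",              # "interface" (plain ENA) is the default
            "SubnetId": "subnet-0123456789abcdef0",
            "Groups": ["sg-0123456789abcdef0"],
        }
    ],
}

# import boto3
# response = boto3.client("ec2").run_instances(**params)
```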

!Elastic IP
''Q: Why am I limited to 5 Elastic IP addresses per region?''

Public (IPv4) internet addresses are a scarce resource. There is only a limited amount of public IP space available, and Amazon EC2 is committed to helping use that space efficiently.

By default, all accounts are limited to 5 Elastic IP addresses per region. If you need more than 5 Elastic IP addresses, we ask that you apply for your limit to be raised. We will ask you to think through your use case and help us understand your need for additional addresses. You can apply for more Elastic IP addresses here. Any increases will be specific to the region they have been requested for.

''Q: Why am I charged when my Elastic IP address is not associated with a running instance?''

In order to help ensure our customers are efficiently using Elastic IP addresses, we impose a small hourly charge for each address when it is not associated with a running instance.

''Q: Do I need one Elastic IP address for every instance that I have running?''

No. You do not need an Elastic IP address for all your instances. By default, every instance comes with a private IP address and an internet routable public IP address. The private IP address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated. The public address is associated exclusively with the instance until it is stopped, terminated or replaced with an Elastic IP address. These IP addresses should be adequate for many applications where you do not need a long lived internet routable end point. Compute clusters, web crawling, and backend services are all examples of applications that typically do not require Elastic IP addresses.

''Q: How long does it take to remap an Elastic IP address?''

The remap process currently takes several minutes from when you instruct us to remap the Elastic IP until it fully propagates through our system.

''Q: Can I configure the reverse DNS record for my Elastic IP address?''

All Elastic IP addresses come with reverse DNS, in a standard template of the form ec2-1-2-3-4.region.compute.amazonaws.com. For customers requiring custom reverse DNS settings for internet-facing applications that use IP-based mutual authentication (such as sending email from EC2 instances), you can configure the reverse DNS record of your Elastic IP address by filling out this form. Alternatively, please contact AWS Customer Support if you want AWS to delegate the management of the reverse DNS for your Elastic IPs to your authoritative DNS name servers (such as Amazon Route 53), so that you can manage your own reverse DNS PTR records to support these use-cases. Note that a corresponding forward DNS record pointing to that Elastic IP address must exist before we can create the reverse DNS record.

!Elastic Load Balancing
''Q: What load balancing options does the Elastic Load Balancing service offer?''

Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security. These include the Classic Load Balancer that routes traffic based on either application or network level information, and the Application Load Balancer that routes traffic based on advanced application level information that includes the content of the request.

''Q: When should I use the Classic Load Balancer and when should I use the Application Load Balancer?''

The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures. Please visit Elastic Load Balancing for more information.

!Enhanced networking
''Q: What networking capabilities are included in this feature?''

We currently support enhanced networking capabilities using SR-IOV (Single Root I/O Virtualization). SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization compared to traditional implementations. For supported Amazon EC2 instances, this feature provides higher packet per second (PPS) performance, lower inter-instance latencies, and very low network jitter.

''Q: Why should I use Enhanced Networking?''

If your applications benefit from high packet-per-second performance and/or low-latency networking, Enhanced Networking will provide significantly improved performance, consistency of performance, and scalability.

''Q: How can I enable Enhanced Networking on supported instances?''

In order to enable this feature, you must launch an HVM AMI with the appropriate drivers. C5, C5d, F1, G3, H1, I3, I3en, m4.16xlarge, M5, M5a, M5ad, M5d, P2, P3, R4, R5, R5a, R5ad, R5d, T3, T3a, X1, X1e, and z1d instances use the Elastic Network Adapter (which uses the “ena” Linux driver) for Enhanced Networking. C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 instances use the Intel® 82599 Virtual Function Interface (which uses the “ixgbevf” Linux driver). The Amazon Linux AMI includes both of these drivers by default. For AMIs that do not contain these drivers, you will need to download and install the appropriate drivers based on the instance types you plan to use. You can use Linux or Windows instructions to enable Enhanced Networking in AMIs that do not include the SR-IOV driver by default. Enhanced Networking is only supported in Amazon VPC.

''Q: Do I need to pay an additional fee to use Enhanced Networking?''

No, there is no additional fee for Enhanced Networking. To take advantage of Enhanced Networking you need to launch the appropriate AMI on a supported instance type in a VPC.

''Q: Why is Enhanced Networking only supported in Amazon VPC?''

Amazon VPC allows us to deliver many advanced networking features to you that are not possible in EC2-Classic. Enhanced Networking is another example of a capability enabled by Amazon VPC.

''Q: Which instance types support Enhanced Networking?''

Depending on your instance type, enhanced networking can be enabled using one of the following mechanisms:

Intel 82599 Virtual Function (VF) interface - The Intel 82599 Virtual Function interface supports network speeds of up to 10 Gbps for supported instance types. C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 instances use the Intel 82599 VF interface for enhanced networking.

Elastic Network Adapter (ENA) - The Elastic Network Adapter (ENA) supports network speeds of up to 25 Gbps for supported instance types. C5, C5d, F1, G3, H1, I3, I3en, m4.16xlarge, M5, M5a, M5ad, M5d, P2, P3, R4, R5, R5a, R5ad, R5d, T3, X1, X1e, and z1d instances use the Elastic Network Adapter for enhanced networking.

''Q. Which instance types offer NVMe instance storage?''

High I/O instances use NVMe-based local instance storage to deliver very high, low-latency I/O capacity to applications, and are optimized for applications that require millions of IOPS. Like Cluster instances, High I/O instances can be clustered via cluster placement groups for high-bandwidth networking.

!Security
''Q: How do I prevent other people from viewing my systems?''

You have complete control over the visibility of your systems. The Amazon EC2 security systems allow you to place your running instances into arbitrary groups of your choice. Using the web services interface, you can then specify which groups may communicate with which other groups, and also which IP subnets on the Internet may talk to which groups. This allows you to control access to your instances in our highly dynamic environment. Of course, you should also secure your instance as you would any other server.

''Q: Can I get a history of all EC2 API calls made on my account for security analysis and operational troubleshooting purposes?''

Yes. To receive a history of all EC2 API calls (including VPC and EBS) made on your account, you simply turn on CloudTrail in the AWS Management Console. For more information, visit the CloudTrail home page.

''Q: Where can I find more information about security on AWS?''

For more information on security on AWS please refer to our Amazon Web Services: Overview of Security Processes white paper and to our Amazon EC2 running Windows Security Guide. 

!Management
Amazon CloudWatch | Amazon EC2 Auto Scaling | Hibernate | VM Import/Export

!Amazon CloudWatch
''Q: What is the minimum time interval granularity for the data that Amazon CloudWatch receives and aggregates?''

Metrics are received and aggregated at 1 minute intervals.

''Q: Which operating systems does Amazon CloudWatch support?''

Amazon CloudWatch receives and provides metrics for all Amazon EC2 instances and should work with any operating system currently supported by the Amazon EC2 service.

''Q: Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?''

You can retrieve metrics data for any Amazon EC2 instance up to 2 weeks from the time you started to monitor it. After 2 weeks, metrics data for an Amazon EC2 instance will not be available if monitoring was disabled for that Amazon EC2 instance. If you want to archive metrics beyond 2 weeks, you can do so by calling the mon-get-stats command from the command line and storing the results in Amazon S3 or Amazon SimpleDB.

''Q: Can I access the metrics data for a terminated Amazon EC2 instance or a deleted Elastic Load Balancer?''

Yes. Amazon CloudWatch stores metrics for terminated Amazon EC2 instances or deleted Elastic Load Balancers for 2 weeks.

''Q: Does the Amazon CloudWatch monitoring charge change depending on which type of Amazon EC2 instance I monitor?''

No, the Amazon CloudWatch monitoring charge does not vary by Amazon EC2 instance type.

''Q: Why does the graphing of the same time window look different when I view in 5 minute and 1 minute periods?''

If you view the same time window in a 5 minute period versus a 1 minute period, you may see that data points are displayed in different places on the graph. For the period you specify in your graph, Amazon CloudWatch finds all the available data points and calculates a single, aggregate point to represent the entire period. In the case of a 5 minute period, the single data point is placed at the beginning of the 5 minute time window. In the case of a 1 minute period, the single data point is placed at the 1 minute mark. We recommend using a 1 minute period for troubleshooting and other activities that require the most precise graphing of time periods.
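The placement behavior described above can be sketched as follows. This is an illustrative model, not CloudWatch itself: the sample values are made up, and simple averaging is assumed as the aggregation statistic.

```python
# Sketch of how CloudWatch places aggregated data points (illustrative only).
# Each (timestamp_seconds, value) sample falls into a period; the aggregate
# for that period is plotted at the period's start time.

def aggregate(samples, period_seconds):
    """Group samples into periods and average each group.
    Returns {period_start_seconds: average_value}."""
    buckets = {}
    for ts, value in samples:
        start = ts - (ts % period_seconds)   # data point lands at period start
        buckets.setdefault(start, []).append(value)
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}

# Five 1-minute samples within one 5-minute window:
samples = [(0, 10), (60, 20), (120, 30), (180, 40), (240, 50)]

print(aggregate(samples, 60))    # five points, one per minute mark
print(aggregate(samples, 300))   # a single point at t=0 for the whole window
```

With a 300-second period, all five samples collapse into one aggregate point placed at the start of the window, which is why the same data can appear to shift position between the two graph views.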

!Amazon EC2 Auto Scaling
''Q: Can I automatically scale my Amazon EC2 fleets?''

Yes. Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. EC2 Auto Scaling helps you maintain application availability through fleet management for EC2 instances, which detects and replaces unhealthy instances, and by scaling your Amazon EC2 capacity up or down automatically according to conditions you define. You can use EC2 Auto Scaling to automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs. For more information, see the Amazon EC2 Auto Scaling FAQ.

!Hibernate
''Q: Why should I hibernate an instance?''

You can hibernate an instance to get your instance and applications up and running quickly if they take a long time to bootstrap (e.g. loading memory caches). You can start instances, bring them to a desired state, and hibernate them. These “pre-warmed” instances can then be resumed to reduce the time it takes for an instance to return to service. Hibernation retains memory state across Stop/Start cycles.

''Q: What happens when I hibernate my instance?''

When you hibernate an instance, data from your EBS root volume and any attached EBS data volumes is persisted. Additionally, the contents of the instance’s memory (RAM) are persisted to the EBS root volume. When the instance is restarted, it returns to its previous state and reloads the RAM contents.

''Q: What is the difference between hibernate and stop?''

In the case of hibernate, your instance is hibernated and the RAM data is persisted. In the case of stop, your instance is shut down and the RAM is cleared.

In both the cases, data from your EBS root volume and any attached EBS data volumes is persisted. Your private IP address remains the same, as does your elastic IP address (if applicable). The network layer behavior will be similar to that of EC2 Stop-Start workflow. Stop and hibernate are available for Amazon EBS backed instances only. Local instance storage is not persisted.

''Q: How much does it cost to hibernate an instance?''

Hibernating instances are charged at standard EBS rates for storage. As with a stopped instance, you do not incur instance usage fees while an instance is hibernating.

''Q: How can I hibernate an instance?''

Hibernation needs to be enabled when you launch the instance. Once enabled, you can use the StopInstances API with an additional ‘Hibernate’ parameter to trigger hibernation. You can also do this through the console by selecting your instance, then clicking Actions > Instance State > Stop - Hibernate. For more information on using hibernation, refer to the user guide.
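A minimal sketch of triggering hibernation programmatically, assuming the boto3 SDK; the instance ID shown is a placeholder, and the instance must have been launched with hibernation enabled.

```python
# Sketch: trigger hibernation with the StopInstances API via boto3.
# StopInstances accepts a Hibernate flag; without it, this is a plain stop.

def hibernate(ec2_client, instance_id):
    """Stop the instance with the Hibernate flag set instead of a plain stop."""
    return ec2_client.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# Usage (requires AWS credentials; the instance ID is a placeholder):
#   import boto3
#   hibernate(boto3.client("ec2"), "i-0123456789abcdef0")
```

Resuming later is just the ordinary StartInstances call, as described below for regular stopped instances.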

''Q: How can I resume a hibernating instance?''

You can resume by calling the StartInstances API as you would for a regular stopped instance. You can also do this through the console by selecting your instance, then clicking Actions > Instance State > Start.

''Q: Can I enable hibernation on an existing instance?''

No, you cannot enable hibernation on an existing instance (running or stopped). This needs to be enabled during instance launch.

''Q: How can I tell that an instance is hibernated?''

You can tell that an instance is hibernated by looking at the state reason, which should be ‘Client.UserInitiatedHibernate’. This is visible on the console under the “Instances - Details” view, or in the “reason” field of the DescribeInstances API response.

''Q: What is the state of an instance when it is hibernating?''

Hibernated instances are in ‘Stopped’ state.

''Q: What data is saved when I hibernate an instance?''

EBS volume storage (boot volume and attached data volumes) and memory (RAM) are saved. Your private IP address remains the same (for VPC), as does your elastic IP address (if applicable). The network layer behavior will be similar to that of EC2 Stop-Start workflow.

''Q: Where is my data stored when I hibernate an instance?''

As with the Stop feature, root device and attached device data are stored on the corresponding EBS volumes. Memory (RAM) contents are stored on the EBS root volume.

''Q: Is my memory (RAM) data encrypted when it is moved to EBS?''

Yes, RAM data is always encrypted when it is moved to the EBS root volume. Encryption on the EBS root volume is enforced at instance launch time. This is to ensure protection for any sensitive content that is in memory at the time of hibernation.

''Q: How long can I keep my instance hibernated?''

We do not support keeping an instance hibernated for more than 60 days. You need to resume the instance and go through Stop and Start (without hibernation) if you wish to keep the instance around for a longer duration.
We are constantly working to keep our platform up-to-date with upgrades and security patches, some of which can conflict with the old hibernated instances. We will notify you for critical updates that require you to resume the hibernated instance to perform a shutdown or a reboot.

''Q: What are the prerequisites to hibernate an instance?''

To use hibernation, the root volume must be an encrypted EBS volume. The instance needs to be configured to receive the ACPID signal for hibernation (or use the Amazon published AMIs that are configured for hibernation). Additionally, your instance should have sufficient space available on your EBS root volume to write data from memory.

''Q: Which instances and operating systems support hibernation?''

Hibernation is currently supported across M3, M4, M5, C3, C4, C5, R3, R4, and R5 instances with less than 150 GB of RAM running Amazon Linux 1. To review the list of supported OS versions, refer to the user guide.

''Q: Should I use specific Amazon Machine Image (AMIs) if I want to hibernate my instance?''

You can use any AMI that is configured to support hibernation. You can use AWS published AMIs that support hibernation by default. Alternatively, you can create a custom image from an instance after following the hibernation pre-requisite checklist and configuring your instance appropriately.

''Q: What if my EBS root volume is not large enough to store memory state (RAM) for hibernate?''

To enable hibernation, space is allocated on the root volume to store the instance memory (RAM). Make sure that the root volume is large enough to store the RAM contents and accommodate your expected usage, e.g. OS and applications. If the EBS root volume does not have enough space, hibernation will fail and the instance will be shut down instead.
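The sizing requirement above amounts to a simple inequality; the figures below are made-up examples, not AWS-recommended values.

```python
# Rough pre-flight check: the EBS root volume needs room for a full RAM image
# on top of the OS and application data (all figures here are assumptions).

def root_volume_ok(volume_gib, ram_gib, os_and_apps_gib):
    """True if the root volume can hold OS/app data plus a full RAM dump."""
    return volume_gib >= os_and_apps_gib + ram_gib

print(root_volume_ok(volume_gib=30, ram_gib=16, os_and_apps_gib=10))  # True
print(root_volume_ok(volume_gib=20, ram_gib=16, os_and_apps_gib=10))  # False: hibernation would fail
```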

!VM Import/Export
''Q. What is VM Import/Export?''

VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.

''Q. What operating systems are supported?''

VM Import/Export currently supports Windows and Linux VMs, including Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2012 R1, Red Hat Enterprise Linux (RHEL) 5.1-6.5 (using Cloud Access), CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, 13.10, and Debian 6.0.0-6.0.8, 7.0.0-7.2.0. For more details on VM Import, including supported file formats, architectures, and operating system configurations, please see the VM Import/Export section of the Amazon EC2 User Guide.

''Q. What virtual machine file formats are supported?''

You can import VMware ESX VMDK images, Citrix Xen VHD images, Microsoft Hyper-V VHD images, and RAW images as Amazon EC2 instances. You can export EC2 instances to VMware ESX VMDK, VMware ESX OVA, Microsoft Hyper-V VHD, or Citrix Xen VHD images. For a full list of supported operating systems, please see What operating systems are supported?.

''Q. What is VMDK?''

VMDK is a file format that specifies a virtual machine hard disk encapsulated within a single file. It is typically used by virtual IT infrastructures such as those sold by VMware, Inc.

''Q. How do I prepare a VMDK file for import using the VMware vSphere client?''

The VMDK file can be prepared by choosing File > Export > Export OVF Template in the VMware vSphere Client. The resulting VMDK file is compressed to reduce the image size and is compatible with VM Import/Export. No special preparation is required if you are using the Amazon EC2 VM Import Connector vApp for VMware vCenter.

''Q. What is VHD?''

VHD (Virtual Hard Disk) is a file format that specifies a virtual machine hard disk encapsulated within a single file. The VHD image format is used by virtualization platforms such as Microsoft Hyper-V and Citrix Xen.

''Q. How do I prepare a VHD file for import from Citrix Xen?''

Open Citrix XenCenter and select the virtual machine you want to export. Under the Tools menu, choose "Virtual Appliance Tools" and select "Export Appliance" to initiate the export task. When the export completes, you can locate the VHD image file in the destination directory you specified in the export dialog.

''Q. How do I prepare a VHD file for import from Microsoft Hyper-V?''

Open the Hyper-V Manager and select the virtual machine you want to export. In the Actions pane for the virtual machine, select "Export" to initiate the export task. Once the export completes, you can locate the VHD image file in the destination directory you specified in the export dialog.

''Q. Are there any other requirements when importing a VM into Amazon EC2?''

The virtual machine must be in a stopped state before generating the VMDK or VHD image. The VM cannot be in a paused or suspended state. We suggest that you export the virtual machine with only the boot volume attached. You can import additional disks using the ImportVolume command and attach them to the virtual machine using AttachVolume. Additionally, encrypted disks (e.g. Bit Locker) and encrypted image files are not supported. You are also responsible for ensuring that you have all necessary rights and licenses to import into AWS and run any software included in your VM image.

''Q. Does the virtual machine need to be configured in any particular manner to enable import to Amazon EC2?''

Ensure Remote Desktop (RDP) or Secure Shell (SSH) is enabled for remote access, and verify that your host firewall (Windows firewall, iptables, or similar), if configured, allows access to RDP or SSH. Otherwise, you will not be able to access your instance after the import is complete. Please also ensure that Windows VMs are configured to use strong passwords for all users, including the administrator, and that Linux VMs are configured with a public key for SSH access.

''Q. How do I import a virtual machine to an Amazon EC2 instance?''

You can import your VM images using the Amazon EC2 API tools:

# Import the VMDK, VHD, or RAW file via the ec2-import-instance API. The import instance task captures the parameters necessary to properly configure the Amazon EC2 instance properties (instance size, Availability Zone, and security groups) and uploads the disk image into Amazon S3.
# If ec2-import-instance is interrupted or terminates without completing the upload, use ec2-resume-import to resume the upload. The import task will resume where it left off.
# Use the ec2-describe-conversion-tasks command to monitor the import progress and obtain the resulting Amazon EC2 instance ID.
# Once your import task is completed, you can boot the Amazon EC2 instance by specifying its instance ID to the ec2-run-instances API.
# Finally, use the ec2-delete-disk-image command line tool to delete your disk image from Amazon S3, as it is no longer needed.

Alternatively, if you use the VMware vSphere virtualization platform, you can import your virtual machine to Amazon EC2 using a graphical user interface provided through AWS Management Portal for vCenter. Please refer to the Getting Started Guide for AWS Management Portal for vCenter. AWS Management Portal for vCenter includes integrated support for VM Import. Once the portal is installed within vCenter, you can right-click on a VM and select “Migrate to EC2” to create an EC2 instance from the VM. The portal will handle exporting the VM from vCenter, uploading it to S3, and converting it into an EC2 instance for you, with no additional work required. You can also track the progress of your VM migrations within the portal.

''Q. How do I export an Amazon EC2 instance back to my on-premise virtualization environment?''

You can export your Amazon EC2 instance using the Amazon EC2 CLI tools:

# Export the instance using the ec2-create-instance-export-task command. The export command captures the parameters necessary (instance ID, S3 bucket to hold the exported image, name of the exported image, and VMDK, OVA, or VHD format) to properly export the instance to your chosen format. The exported file is saved in an S3 bucket that you previously created.
# Use ec2-describe-export-tasks to monitor the export progress.
# Use ec2-cancel-export-task to cancel an export task prior to completion.

''Q. Are there any other requirements when exporting an EC2 instance using VM Import/Export?''

You can export running or stopped EC2 instances that you previously imported using VM Import/Export. If the instance is running, it will be momentarily stopped to snapshot the boot volume. EBS data volumes cannot be exported. EC2 instances with more than one network interface cannot be exported.

''Q. Can I export Amazon EC2 instances that have one or more EBS data volumes attached?''

Yes, but VM Import/Export will only export the boot volume of the EC2 instance.

''Q. What does it cost to import a virtual machine?''

You will be charged standard Amazon S3 data transfer and storage fees for uploading and storing your VM image file. Once your VM is imported, standard Amazon EC2 instance hour and EBS service fees apply. If you no longer wish to store your VM image file in S3 after the import process completes, use the ec2-delete-disk-image command line tool to delete your disk image from Amazon S3.

''Q. What does it cost to export a virtual machine?''

You will be charged standard Amazon S3 storage fees for storing your exported VM image file. You will also be charged standard S3 data transfer charges when you download the exported VM file to your on-premise virtualization environment. Finally, you will be charged standard EBS charges for storing a temporary snapshot of your EC2 instance. To minimize storage charges, delete the VM image file in S3 after downloading it to your virtualization environment.

''Q. When I import a VM of Windows Server 2003 or 2008, who is responsible for supplying the operating system license?''

When you launch an imported VM using Microsoft Windows Server 2003 or 2008, you will be charged standard instance hour rates for Amazon EC2 running the appropriate Windows Server version, which includes the right to utilize that operating system within Amazon EC2. You are responsible for ensuring that all other installed software is properly licensed.

''Q. What happens to my on-premise Microsoft Windows license key when I import a VM of Windows Server 2003 or 2008?''

Since the on-premise Microsoft Windows license key that was associated with that VM is not used when running your imported VM as an EC2 instance, you can reuse it for another VM within your on-premise environment.

''Q. Can I continue to use the AWS-provided Microsoft Windows license key after exporting an EC2 instance back to my on-premise virtualization environment?''

No. After an EC2 instance has been exported, the license key utilized in the EC2 instance is no longer available. You will need to reactivate and specify a new license key for the exported VM after it is launched in your on-premise virtualization platform.

''Q. When I import a VM with Red Hat Enterprise Linux (RHEL), who is responsible for supplying the operating system license?''

When you import Red Hat Enterprise Linux (RHEL) VM images, you can use license portability for your RHEL instances. With license portability, you are responsible for maintaining the RHEL licenses for imported instances, which you can do using Cloud Access subscriptions for Red Hat Enterprise Linux. Please contact Red Hat to learn more about Cloud Access and to verify your eligibility.

''Q. How long does it take to import a virtual machine?''

The length of time to import a virtual machine depends on the size of the disk image and your network connection speed. As an example, a 10 GB Windows Server 2008 SP2 VMDK image takes approximately 2 hours to import when it’s transferred over a 10 Mbps network connection. If you have a slower network connection or a large disk to upload, your import may take significantly longer.
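The 10 GB / 10 Mbps figure above is easy to sanity-check with back-of-the-envelope arithmetic; this sketch ignores protocol overhead and uses decimal units, so it is an estimate, not a guarantee.

```python
# Back-of-the-envelope transfer-time estimate for a VM import.
# Ignores network overhead; reproduces the 10 GB over 10 Mbps example above.

def transfer_hours(image_gb, link_mbps):
    """Hours to move image_gb gigabytes over a link_mbps megabit/s link."""
    megabits = image_gb * 1000 * 8        # GB -> megabits (decimal units)
    return megabits / link_mbps / 3600    # megabits/Mbps = seconds; then hours

print(round(transfer_hours(10, 10), 1))   # -> 2.2, roughly the 2 hours cited
```

Doubling the image size or halving the link speed doubles the estimate, which is why large disks over slow connections can take significantly longer.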

''Q. In which Amazon EC2 regions can I use VM Import/Export?''

Visit the Region Table page to see product service availability by region.

''Q. How many simultaneous import or export tasks can I have?''

Each account can have up to five active import tasks and five export tasks per region.

''Q. Can I run imported virtual machines in Amazon Virtual Private Cloud (VPC)?''

Yes, you can launch imported virtual machines within Amazon VPC.

''Q. Can I use the AWS Management Console with VM Import/Export?''

No. VM Import/Export commands are available via EC2 CLI and API. You can also use the AWS Management Portal for vCenter to import VMs into Amazon EC2. Once imported, the resulting instances are available for use via the AWS Management Console.

!Billing and purchase options
Billing | Convertible Reserved Instances | EC2 Fleet | On-Demand Capacity Reservation | Reserved Instances | Reserved Instance Marketplace | Spot instances

!!Billing
''Q: How will I be charged and billed for my use of Amazon EC2?''

You pay only for what you use. Displayed pricing is an hourly rate; depending on which instances you choose, you pay by the hour or by the second (with a minimum of 60 seconds) for each instance type. Partial instance-hours consumed are billed based on instance usage. Data transferred between AWS services in different regions will be charged as Internet Data Transfer on both sides of the transfer. Usage for other Amazon Web Services is billed separately from Amazon EC2.

For EC2 pricing information, please visit the pricing section on the EC2 detail page.

''Q: When does billing of my Amazon EC2 systems begin and end?''

Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running "shutdown -h", or through instance failure. When you stop an instance, we shut it down but don't charge hourly usage for a stopped instance, or data transfer fees, but we do charge for the storage for any Amazon EBS volumes. To learn more, visit the AWS Documentation.

''Q: What defines billable EC2 instance usage?''

Instance usage is billed for any time your instances are in a "running" state. If you no longer wish to be charged for your instance, you must "stop" or "terminate" the instance to avoid being billed for additional instance usage. Billing starts when an instance transitions into the running state.

''Q: If I have two instances in different availability zones, how will I be charged for regional data transfer?''

Each instance is charged for its data in and data out at the corresponding Data Transfer rates. Therefore, if data is transferred between these two instances, it is charged at "Data Transfer Out from EC2 to Another AWS Region" for the first instance and at "Data Transfer In from Another AWS Region" for the second instance. Please refer to this page for detailed data transfer rates.

''Q. If I have two instances in different regions, how will I be charged for data transfer?''

Each instance is charged for its data in and data out at Internet Data Transfer rates. Therefore, if data is transferred between these two instances, it is charged at Internet Data Transfer Out for the first instance and at Internet Data Transfer In for the second instance.

''Q: How will my monthly bill show per-second versus per-hour?''

Although EC2 charges in your monthly bill will now be calculated on a per-second basis, for consistency, the monthly EC2 bill will show cumulative usage for each instance that ran in a given month in decimal hours. For example, an instance running for 1 hour, 10 minutes, and 4 seconds would appear as 1.1677. Read this blog for an example of the detailed billing report.
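The decimal-hours figure can be reproduced as follows. Note the four-decimal truncation here is an assumption inferred from the 1.1677 example above, not documented billing behavior.

```python
# Reproduce the decimal-hours figure shown on the monthly bill.
# Truncation to four decimal places is an assumption inferred from the
# 1.1677 example (simple rounding would give 1.1678 instead).
import math

def billed_decimal_hours(hours, minutes, seconds):
    """Convert a run duration to decimal hours, truncated to 4 places."""
    total_seconds = hours * 3600 + minutes * 60 + seconds
    return math.floor(total_seconds / 3600 * 10_000) / 10_000

print(billed_decimal_hours(1, 10, 4))   # -> 1.1677
```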

''Q: Do your prices include taxes?''

Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more.

!Convertible Reserved Instances
''Q: What is a Convertible RI?''

A Convertible RI is a type of Reserved Instance with attributes that can be changed during the term.

''Q: When should I purchase a Convertible RI instead of a Standard RI?''

The Convertible RI is useful for customers who can commit to using EC2 instances for a three-year term in exchange for a significant discount on their EC2 usage, are uncertain about their instance needs in the future, or want to benefit from changes in price.

''Q: What term length options are available on Convertible RIs?''

Like Standard RIs, Convertible RIs are available for purchase for a one-year or three-year term.

''Q: Can I exchange my Convertible RI to benefit from a Convertible RI matching a different instance type, operating system, tenancy, or payment option?''

Yes, you can select a new instance type, operating system, tenancy, or payment option when you exchange your Convertible RIs. You also have the flexibility to exchange a portion of your Convertible RI or merge the value of multiple Convertible RIs in a single exchange. Click here to learn more about exchanging Convertible RIs.

''Q: Can I transfer a Convertible or Standard RI from one region to another?''

No, a RI is associated with a specific region, which is fixed for the duration of the reservation's term.

''Q: How do I change the configuration of a Convertible RI?''

You can change the configuration of your Convertible RI using the EC2 Management Console or the GetReservedInstancesExchangeQuote API. You also have the flexibility to exchange a portion of your Convertible RI or merge the value of multiple Convertible RIs in a single exchange. Click here to learn more about exchanging Convertible RIs.

''Q: Do I need to pay a fee when I exchange my Convertible RIs?''

No, you do not pay a fee when you exchange your RIs. However, you may need to pay a one-time true-up charge that accounts for differences in pricing between the Convertible RIs that you have and the Convertible RIs that you want.

''Q: How do Convertible RI exchanges work?''

When you exchange one Convertible RI for another, EC2 ensures that the total value of the Convertible RIs is maintained through a conversion. So, if you are converting your RI with a total value of $1000 for another RI, you will receive a quantity of Convertible RIs with a value that’s equal to or greater than $1000. You cannot convert your Convertible RI for Convertible RI(s) of a lesser total value.

''Q: Can you define total value?''

The total value is the sum of all expected payments that you’d make during the term for the RI.

''Q: Can you walk me through how the true-up cost is calculated for a conversion between two All Upfront Convertible RIs?''

Sure, let’s say you purchased an All Upfront Convertible RI for $1000 upfront, and halfway through the term you decide to change the attributes of the RI. Since you’re halfway through the RI term, you have $500 left of prorated value remaining on the RI. The All Upfront Convertible RI that you want to convert into costs $1,200 upfront today. Since you only have half of the term left on your existing Convertible RI, there is $600 of value remaining on the desired new Convertible RI. The true-up charge that you’ll pay will be the difference in upfront value between original and desired Convertible RIs, or $100 ($600 - $500).
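The arithmetic in this example generalizes: the true-up charge is the prorated upfront value of the desired RI minus the prorated upfront value of the existing RI. A minimal sketch of that rule (the function name is illustrative, not an AWS API):

```python
def true_up_charge(old_upfront: float, new_upfront: float,
                   fraction_of_term_remaining: float) -> float:
    """Prorated true-up charge for exchanging one All Upfront Convertible RI for another."""
    remaining_old = old_upfront * fraction_of_term_remaining  # e.g., $500
    remaining_new = new_upfront * fraction_of_term_remaining  # e.g., $600
    # You pay the difference; a negative difference means no true-up charge.
    return max(remaining_new - remaining_old, 0.0)

print(true_up_charge(1000, 1200, 0.5))  # 100.0, matching the example above
```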

''Q: Can you walk me through a conversion between No Upfront Convertible RIs?''

Since you're converting between RIs without an upfront cost, there will not be a true-up charge (unlike conversions between Convertible RIs with an upfront value). However, the total amount you pay on an hourly basis after the exchange must be greater than or equal to the amount you were paying on an hourly basis before the exchange.

For example, let’s say you purchased one No Upfront Convertible RI (A) with a $0.10/hr rate, and you decide to exchange Convertible RI A for another RI (B) that costs $0.06/hr. When you convert, you will receive two RIs of B because the amount that you pay on an hourly basis must be greater than or equal to the amount you’re paying for A on an hourly basis.
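The quantity received follows from rounding up the ratio of the two hourly rates, so that the new total hourly payment is at least the old one. A sketch of that rule (illustrative only):

```python
import math

def exchanged_quantity(old_hourly: float, new_hourly: float) -> int:
    """Smallest number of new RIs whose total hourly cost >= the old RI's hourly cost."""
    return math.ceil(old_hourly / new_hourly)

print(exchanged_quantity(0.10, 0.06))  # 2: two $0.06/hr RIs cover a $0.10/hr RI
```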

''Q: Can I customize the number of instances that I receive as a result of a Convertible RI exchange?''

No, EC2 uses the value of the Convertible RIs you’re trading in to calculate the minimal number of Convertible RIs you’ll receive while ensuring the result of the exchange gives you Convertible RIs of equal or greater value.

''Q: Are there exchange limits for Convertible RIs?''

No, there are no exchange limits for Convertible RIs.

''Q: Do I have the freedom to choose any instance type when I exchange my Convertible RIs?''

No, you can only exchange into Convertible RIs that are currently offered by AWS.

''Q: Can I upgrade the payment option associated with my Convertible RI?''

Yes, you can upgrade the payment option associated with your RI. For example, you can exchange your No Upfront RIs for Partial or All Upfront RIs to benefit from better pricing. You cannot change the payment option from All Upfront to No Upfront, and cannot change from Partial Upfront to No Upfront.

''Q: Do Convertible RIs allow me to benefit from price reductions when they happen?''

Yes, you can exchange your RIs to benefit from lower pricing. For example, if the price of new Convertible RIs reduces by 10%, you can exchange your Convertible RIs and benefit from the 10% reduction in price.

!EC2 Fleet
''Q. What is Amazon EC2 Fleet?''

With a single API call, EC2 Fleet lets you provision compute capacity across different instance types, Availability Zones and across On-Demand, Reserved Instances (RI) and Spot Instances purchase models to help optimize scale, performance and cost.

''Q. If I currently use Amazon EC2 Spot Fleet should I migrate to Amazon EC2 Fleet?''

If you are leveraging Amazon EC2 Spot Instances with Spot Fleet, you can continue to use that. Spot Fleet and EC2 Fleet offer the same functionality. There is no requirement to migrate.

''Q. Can I use Reserved Instance (RI) discounts with Amazon EC2 Fleet?''

Yes. As with other EC2 APIs and other AWS services that launch EC2 instances, if an On-Demand instance launched by EC2 Fleet matches an existing RI, that instance will receive the RI discount. For example, if you own Regional RIs for M4 instances and you have specified only M4 instances in your EC2 Fleet, RI discounts will be automatically applied to this usage of M4.

''Q. Will Amazon EC2 Fleet failover to On-Demand if EC2 Spot capacity is not fully fulfilled?''

No, EC2 Fleet will continue to attempt to meet your desired Spot capacity based on the number of Spot instances you requested in your Fleet launch specification.

''Q. What is the pricing for Amazon EC2 Fleet?''

EC2 Fleet comes at no additional charge; you only pay for the underlying resources that EC2 Fleet launches.

''Q. Can you provide a real world example of how I can use Amazon EC2 Fleet?''

There are a number of ways to take advantage of Amazon EC2 Fleet, such as big data workloads, containerized applications, and grid processing workloads. In this example of a genomic sequencing workload, you can launch a grid of worker nodes with a single API call: select your favorite instances, assign weights for these instances, specify target capacity for On-Demand and Spot Instances, and build a fleet within seconds to crunch through genomic data quickly.

''Q. How can I allocate resources in an Amazon EC2 Fleet?''

By default, EC2 Fleet will launch the lowest-priced On-Demand option. For Spot Instances, EC2 Fleet provides two allocation strategies: lowest price and diversified. The lowest price strategy allows you to provision Spot Instances in pools that provide the lowest price per unit of capacity at the time of the request. The diversified strategy allows you to provision Spot Instances across multiple Spot pools, helping you maintain your fleet's target capacity and increase application availability.

''Q. Can I submit a multi-region Amazon EC2 Fleet request?''

No, we do not support multi-region EC2 Fleet requests.

''Q. Can I tag an Amazon EC2 Fleet?''

Yes. You can tag an EC2 Fleet request to create business-relevant tag groupings to organize resources along technical, business, and security dimensions.

''Q. Can I modify my Amazon EC2 Fleet?''

Yes, you can modify the total target capacity of your EC2 Fleet when in maintain mode. You may need to cancel the request and submit a new one to change other request configuration parameters.

''Q. Can I specify a different AMI for each instance type that I want to use?''

Yes, simply specify the AMI you’d like to use in each launch specification you provide in your EC2 Fleet.

!On-Demand Capacity Reservation
On-Demand Capacity Reservation is an EC2 offering that lets you create and manage reserved capacity on Amazon EC2. You can create a Capacity Reservation by choosing an Availability Zone and quantity (number of instances) along with other instance specifications such as instance type and tenancy. Once created, the EC2 capacity is held for you regardless of whether you run the instances or not.

''Q. How much do Capacity Reservations cost?''

When the Capacity Reservation is active, you will pay equivalent instance charges whether you run the instances or not. If you do not use the reservation, the charge will show up as unused reservation on your EC2 bill. When you run an instance that matches the attributes of a reservation, you just pay for the instance and nothing for the reservation. There are no upfront or additional charges.

For example, if you create a Capacity Reservation for 20 c5.2xlarge instances and you run 15 c5.2xlarge instances, you will be charged for 15 instances and 5 unused spots in the reservation (effectively charged for 20 instances).
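In other words, you are billed for the larger of the reserved quantity and the number of matching running instances. A minimal sketch of that billing rule (illustrative, not an AWS API):

```python
def billed_instances(reserved: int, running: int) -> int:
    """Instance-equivalents billed: running instances plus unused reserved slots."""
    unused = max(reserved - running, 0)
    return running + unused  # effectively max(reserved, running)

print(billed_instances(20, 15))  # 20: 15 running + 5 unused reserved slots
```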

''Q: Can I get a discount for Capacity Reservation usage?''

Yes. Regional RI (RI scoped to a region) discounts apply to Capacity Reservations. AWS Billing automatically applies your RI discount when the attributes of a Capacity Reservation match the attributes of an active Regional RI. When a Capacity Reservation is used by an instance, you are only charged for the instance (with RI discounts applied). Regional RI discounts are preferentially applied to running instances before covering unused Capacity Reservations.

For example, if you have a Regional RI for 50 c5.2xlarge instances and a Capacity Reservation for 50 c5.2xlarge instances in the same region, the RI discount will apply to the unused portion of the reservation. Note that discounts will first apply to any c5 instance usage (across instances sizes and Availability Zones) within that region before applying to unused reservations.

Note: Zonal RIs (RIs scoped to an Availability Zone) do not apply to Capacity Reservations, as Zonal RIs already come with a capacity reservation.

''Q. When should I use RIs and when should I use Capacity Reservations?''

Use Regional RIs for their discount benefit while committing to a one or three year term. Regional RIs automatically apply your discount to usage across Availability Zones and instance sizes, making it easier for you to take advantage of the RI’s discounted rate.

Use Capacity Reservations if you need the additional confidence in your ability to launch instances. Capacity Reservations can be created for any duration and can be managed independently of your RIs.

If you have Regional RI discounts, they will automatically apply to matching Capacity Reservations. This gives you the flexibility to selectively add Capacity Reservations to a portion of your instance footprint and still get the Regional RI discounts for that usage.

''Q. I have a Zonal RI (RI scoped to an Availability Zone) that also provides a capacity reservation. How does this compare with a Capacity Reservation?''

A Zonal RI provides both a discount and a capacity reservation in a specific Availability Zone in return for a 1-to-3 year commitment. Capacity Reservation allows you to create and manage reserved capacity independently of your RI commitment and term length.

A Regional RI can be combined with an On-Demand Capacity Reservation to get, at the minimum, the exact same benefits of a Zonal RI for no additional cost. You also get the enhanced flexibility of Regional RI discounts and the features of Capacity Reservation: the ability to add or subtract from the reservation at any time, view utilization in real-time, and the ability to target a Capacity Reservation for specific workloads.

Re-scoping your Zonal RIs to a region immediately gives you the Availability Zone and instance size flexibility in how RI discounts are applied. You can convert your Standard Zonal RIs to a Regional RI by modifying the scope of the RI from a specific Availability Zone to a region using the EC2 management console or the ModifyReservedInstances API.

''Q. I created a Capacity Reservation. How can I use it?''

A Capacity Reservation is tied to a specific Availability Zone and is, by default, automatically utilized by running instances in that Availability Zone. When you launch new instances that match the reservation attributes, they will automatically match to the reservation.

You can also target a reservation for specific workloads/instances if you prefer. Refer to the Linux or Windows technical documentation to learn more about the targeting option.

''Q. How many instances am I allowed to reserve?''

The number of instances you are allowed to reserve is based on your account's On-Demand instance limit. You can reserve as many instances as that limit allows, minus the number of instances that are already running.

If you need a higher limit, contact your AWS sales representative or complete the Amazon EC2 instance request form with your use case and your instance increase will be considered. Limit increases are tied to the region they are requested for.
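The limit arithmetic above can be sketched as follows (a simplification; the function name is illustrative):

```python
def reservable_instances(on_demand_limit: int, running: int) -> int:
    """Instances you can still reserve: the On-Demand limit minus instances already running."""
    return max(on_demand_limit - running, 0)

print(reservable_instances(100, 40))  # 60
```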

''Q. Can I modify a Capacity Reservation after it has started?''

Yes. You can reduce the number of instances you reserved at any time. You can also increase the number of instances (subject to availability). You can also modify the end time of your reservation. You cannot modify a Capacity Reservation that has ended or has been deleted.

''Q. Can I end a Capacity Reservation after it has started?''

Yes. You can end a Capacity Reservation by canceling it using the console or API/SDK, or by modifying your reservation to specify an end time that makes it expire automatically. Running instances are unaffected by changes to your Capacity Reservation including deletion or expiration of a reservation.

''Q. Where can I find more information about using Capacity Reservations?''

Refer to the Linux or Windows technical documentation to learn about creating and using a Capacity Reservation.

!Reserved Instances
''Q: What is a Reserved Instance?''

A Reserved Instance (RI) is an EC2 offering that provides you with a significant discount on EC2 usage when you commit to a one-year or three-year term.

''Q: What are the differences between Standard RIs and Convertible RIs?''

Standard RIs offer a significant discount on EC2 instance usage when you commit to a particular instance family. Convertible RIs offer you the option to change your instance configuration during the term, and still receive a discount on your EC2 usage. For more information on Convertible RIs, please click here.

''Q: Do RIs provide a capacity reservation?''

Yes, when a Standard or Convertible RI is scoped to a specific Availability Zone (AZ), instance capacity matching the exact RI configuration is reserved for your use (these are referred to as “zonal RIs”). Zonal RIs give you additional confidence in your ability to launch instances when you need them.

You can also choose to forego the capacity reservation and purchase Standard or Convertible RIs that are scoped to a region (referred to as “regional RIs”). Regional RIs automatically apply the discount to usage across Availability Zones and instance sizes in a region, making it easier for you to take advantage of the RI’s discounted rate.

''Q: When should I purchase a zonal RI?''

If you want to take advantage of the capacity reservation, then you should buy an RI in a specific Availability Zone.

''Q: When should I purchase a regional RI?''

If you do not require the capacity reservation, then you should buy a regional RI. Regional RIs provide AZ and instance size flexibility, which offers broader applicability of the RI’s discounted rate.

''Q: What are Availability Zone and instance size flexibility?''

Availability Zone and instance size flexibility make it easier for you to take advantage of your regional RI’s discounted rate. Availability Zone flexibility applies your RI’s discounted rate to usage in any Availability Zone in a region, while instance size flexibility applies your RI’s discounted rate to usage of any size within an instance family. Let’s say you own an m5.2xlarge Linux/Unix regional RI with default tenancy in US East (N.Virginia). Then this RI’s discounted rate can automatically apply to two m5.xlarge instances in us-east-1a or four m5.large instances in us-east-1b.

''Q: What types of RIs provide instance size flexibility?''

Linux/Unix regional RIs with the default tenancy provide instance size flexibility. Instance size flexibility is not available on RIs of other platforms such as Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES.

''Q: Do I need to take any action to take advantage of Availability Zone and instance size flexibility?''

Regional RIs do not require any action to take advantage of Availability Zone and instance size flexibility.

''Q: I own zonal RIs; how do I assign them to a region?''

You can assign your Standard zonal RIs to a region by modifying the scope of the RI from a specific Availability Zone to a region from the EC2 management console or by using the ModifyReservedInstances API.

''Q: How do I purchase an RI?''

To get started, you can purchase an RI from the EC2 Management Console or by using the AWS CLI. Simply specify the instance type, platform, tenancy, term, payment option, and region or Availability Zone.

''Q: Can I purchase an RI for a running instance?''

Yes, AWS will automatically apply an RI’s discounted rate to any applicable instance usage from the time of purchase. Visit the Getting Started page to learn more.

''Q: Can I control which instances are billed at the discounted rate?''

No. AWS automatically optimizes which instances are charged at the discounted rate to ensure you always pay the lowest amount. For information about billing, and how it applies to RIs, see Billing Benefits and Payment Options.

''Q: How does instance size flexibility work?''

EC2 uses the scale shown below to compare different sizes within an instance family. In the case of instance size flexibility on RIs, this scale is used to apply the discounted rate of RIs to the normalized usage of the instance family. For example, if you have an m5.2xlarge RI that is scoped to a region, then your discounted rate could apply towards the usage of 1 m5.2xlarge or 2 m5.xlarge instances.

Click here to learn more about how instance size flexibility of RIs applies to your EC2 usage. And click here to learn about how instance size flexibility of RIs is presented in the Cost and Usage Report.
|!Instance Size|!Normalization Factor|
|nano|0.25|
|micro|0.5|
|small|1|
|medium|2|
|large|4|
|xlarge|8|
|2xlarge|16|
|4xlarge|32|
|8xlarge|64|
|9xlarge|72|
|10xlarge|80|
|12xlarge|96|
|16xlarge|128|
|18xlarge|144|
|24xlarge|192|
|32xlarge|256|
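The normalization factors in the table above determine how a regional RI's discount spreads across sizes in a family. A sketch (the dictionary simply transcribes the table; the helper function is illustrative):

```python
# Normalization factors per instance size, transcribed from the table above.
FACTORS = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2, "large": 4,
    "xlarge": 8, "2xlarge": 16, "4xlarge": 32, "8xlarge": 64,
    "9xlarge": 72, "10xlarge": 80, "12xlarge": 96, "16xlarge": 128,
    "18xlarge": 144, "24xlarge": 192, "32xlarge": 256,
}

def instances_covered(ri_size: str, target_size: str) -> float:
    """How many instances of target_size one RI of ri_size can cover."""
    return FACTORS[ri_size] / FACTORS[target_size]

print(instances_covered("2xlarge", "xlarge"))  # 2.0 (e.g., two m5.xlarge)
print(instances_covered("2xlarge", "large"))   # 4.0 (e.g., four m5.large)
```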
''Q: Can I change my RI during its term?''

Yes, you can modify the Availability Zone of the RI, change the scope of the RI from Availability Zone to region (and vice-versa), change the network platform from EC2-VPC to EC2-Classic (and vice versa) or modify instance sizes within the same instance family (on the Linux/Unix platform).

''Q: Can I change the instance type of my RI during its term?''

Yes, Convertible RIs offer you the option to change the instance type, operating system, tenancy or payment option of your RI during its term. Please refer to the Convertible RI section of the FAQ for additional information.

''Q: What are the different payment options for RIs?''

You can choose from three payment options when you purchase an RI. With the All Upfront option, you pay for the entire RI term with one upfront payment. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the RI term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term.

''Q: When are RIs activated?''

The billing discount and capacity reservation (if applicable) is activated once your payment has successfully been authorized. You can view the status (pending | active | retired) of your RIs on the "Reserved Instances" page of the Amazon EC2 Console.

''Q: Do RIs apply to Spot instances or instances running on a Dedicated Host?''

No, RIs do not apply to Spot instances or instances running on Dedicated Hosts. To lower the cost of using Dedicated Hosts, purchase Dedicated Host Reservations.

''Q: How do RIs work with Consolidated Billing?''

Our system automatically optimizes which instances are charged at the discounted rate to ensure that the consolidated accounts always pay the lowest amount. If you own RIs that apply to an Availability Zone, then only the account which owns the RI will receive the capacity reservation. However, the discount will automatically apply to usage in any account across your consolidated billing family.

''Q: Can I get a discount on RI purchases?''

Yes, EC2 provides tiered discounts on RI purchases. These discounts are determined based on the total list value (non-discounted price) for the active RIs you have per region. Your total list value is the sum of all expected payments for an RI within the term, including both the upfront and recurring hourly payments. The tier ranges and corresponding discounts are shown alongside.

|!Tier Range of List Value|!Discount on Upfront|!Discount on Hourly|
|Less than $500k|0%|0%|
|$500k-$4M|5%|5%|
|$4M-$10M|10%|10%|
|More than $10M|Call Us| |
''Q: Can you help me understand how volume discounts are applied to my RI purchases?''

Sure. Let's assume that you currently have $400,000 worth of active RIs in the US-east-1 region. Now, if you purchase RIs worth $150,000 in the same region, then the first $100,000 of this purchase would not receive a discount. However, the remaining $50,000 of this purchase would be discounted by 5 percent, so you would only be charged $47,500 for this portion of the purchase over the term based on your payment option.

To learn more, please visit the Understanding Reserved Instance Discount Pricing Tier portion of the Amazon EC2 User Guide.
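The tiered discount in the example above can be computed mechanically. This sketch assumes discounts apply only to the portion of a purchase falling inside each tier, as the example describes, and applies no automatic discount above $10M (where the table says "Call Us"):

```python
# (upper bound of tier in $, discount %)
TIERS = [(500_000, 0), (4_000_000, 5), (10_000_000, 10)]

def discounted_cost(active_list_value: int, purchase: int) -> int:
    """Amount charged for an RI purchase, given active RI list value in the region."""
    cost, position, remaining = 0, active_list_value, purchase
    for upper_bound, pct in TIERS:
        in_tier = min(remaining, max(upper_bound - position, 0))
        cost += in_tier * (100 - pct) // 100   # charge the discounted portion
        position += in_tier
        remaining -= in_tier
    return cost + remaining  # above $10M: no automatic discount assumed

print(discounted_cost(400_000, 150_000))  # 147500 = 100000 undiscounted + 47500 at 5% off
```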

''Q: How do I calculate the list value of an RI?''

Here is a sample list value calculation for three-year Partial Upfront Reserved Instances:

3yr Partial Upfront Volume Discount Value in US-East

| |!Upfront $|!Recurring Hourly $|!Recurring Hourly Value|!List Value|
|m3.xlarge|$1,345|$0.060|$1,577|$2,922|
|c3.xlarge|$1,016|$0.045|$1,183|$2,199|
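The list values above follow from the upfront payment plus the hourly payments over the full term. A sketch assuming a 3-year term of 26,280 hours (3 × 365 × 24):

```python
def list_value(upfront: float, hourly: float, years: int = 3) -> float:
    """Total list value of an RI: upfront payment plus hourly payments over the term."""
    hours_in_term = years * 365 * 24  # 26,280 hours for a 3-year term
    return upfront + hourly * hours_in_term

print(round(list_value(1345, 0.060)))  # 2922 (m3.xlarge row)
print(round(list_value(1016, 0.045)))  # 2199 (c3.xlarge row)
```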
''Q: How are volume discounts calculated if I use Consolidated Billing?''

If you leverage Consolidated Billing, AWS will use the aggregate total list price of active RIs across all of your consolidated accounts to determine which volume discount tier to apply. Volume discount tiers are determined at the time of purchase, so you should activate Consolidated Billing prior to purchasing RIs to ensure that you benefit from the largest possible volume discount that your consolidated accounts are eligible to receive.

''Q: Do Convertible RIs qualify for Volume Discounts?''

No, however the value of each Convertible RI that you purchase contributes to your volume discount tier standing.

''Q: How do I determine which volume discount tier applies to me?''

To determine your current volume discount tier, please consult the Understanding Reserved Instance Discount Pricing Tiers portion of the Amazon EC2 User Guide.

''Q: Will the cost of my RIs change, if my future volume qualifies me for other discount tiers?''

No. Volume discounts are determined at the time of purchase, therefore the cost of your RIs will continue to remain the same as you qualify for other discount tiers. Any new purchase will be discounted according to your eligible volume discount tier at the time of purchase.

''Q: Do I need to take any action at the time of purchase to receive volume discounts?''

No, you will automatically receive volume discounts when you use the existing PurchaseReservedInstance API or EC2 Management Console interface to purchase RIs. If you purchase more than $10M worth of RIs, contact us about receiving discounts beyond those that are automatically provided.

!Reserved Instance Marketplace
''Q. What is the Reserved Instance Marketplace?''

The Reserved Instance Marketplace is an online marketplace that provides AWS customers the flexibility to sell their Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances to other businesses and organizations. Customers can also browse the Reserved Instance Marketplace to find an even wider selection of Reserved Instance term lengths and pricing options sold by other AWS customers.

''Q. When can I list a Reserved Instance on the Reserved Instance Marketplace?''

You can list a Reserved Instance when:

* You've registered as a seller in the Reserved Instance Marketplace.
* You've paid for your Reserved Instance.
* You've owned the Reserved Instance for longer than 30 days.
''Q. How will I register as a seller for the Reserved Instance Marketplace?''

To register for the Reserved Instance Marketplace, you can enter the registration workflow by selling a Reserved Instance from the EC2 Management Console or setting up your profile from the "Account Settings" page on the AWS portal. No matter the route, you will need to complete the following steps:

# Start by reviewing the overview of the registration process.
# Log in to your AWS Account.
# Enter the bank account into which you want us to disburse funds. Once you select "Continue", we will set that bank account as the default disbursement option.
# In the confirmation screen, choose "Continue to Console to Start Listing".
# If you exceed $20,000 in sales of Reserved Instances, or plan to sell 50 or more Reserved Instances, you will need to provide tax information before you can list your Reserved Instances. Choose "Continue with Tax Interview". During the tax interview pipeline, you will be prompted to enter your company name, contact name, address, and Tax Identification Number using the TIMS workflow.

Additionally, if you plan to sell Reserved Instances worth more than $50,000 per year you will also need to file a limit increase.

''Q. How will I know when I can start selling on the Reserved Instance Marketplace?''

You can start selling on the Reserved Instance Marketplace after you have added a bank account through the registration pipeline. Once activation is complete, you will receive a confirmation email. However, it is important to note that you will not be able to receive disbursements until we are able to receive verification from your bank, which may take up to two weeks, depending on the bank you use.

''Q. How do I list a Reserved Instance for sale?''

To list a Reserved Instance, simply complete these steps in the Amazon EC2 Console:

# Select the Reserved Instances you wish to sell, and choose "Sell Reserved Instances". If you have not completed the registration process, you will be prompted to register using the registration pipeline.
# For each Reserved Instance type, set the number of instances you'd like to sell and the one-time price you would like to charge. Note that you can set the one-time price to different amounts depending on the amount of time remaining, so that you don't have to keep adjusting your one-time price if your Reserved Instance doesn't sell quickly. By default, you just need to set the current price and we will automatically decrease the one-time price by the same increment each month.
# Once you have configured your listing, a final confirmation screen will appear. Choose "Sell Reserved Instance".
''Q. Which Reserved Instances can I list for sale?''

You can list any Reserved Instances that have been active for at least 30 days, and for which we have received payment. Typically, this means that you can list your reservations once they are in the active state. It is important to note that if you are an invoice customer, your Reserved Instance can be in the active state prior to AWS receiving payment. In this case, your Reserved Instance will not be listed until we have received your payment.

''Q. How are listed Reserved Instances displayed to buyers?''

Reserved Instances (both third-party and those offered by AWS) that have been listed on the Reserved Instance Marketplace can be viewed in the "Reserved Instances" section of the Amazon EC2 Console. You can also use the DescribeReservedInstancesListings API call.

The listed Reserved Instances are grouped based on the type, term remaining, upfront price, and hourly price. This makes it easier for buyers to find the right Reserved Instances to purchase.

''Q. How much of my Reserved Instance term can I list?''

You can sell a Reserved Instance for the term remaining, rounded down to the nearest month. For example, if you had 9 months and 13 days remaining, you would list it for sale as a 9-month-term Reserved Instance.

''Q. Can I remove my Reserved Instance after I’ve listed it for sale?''

Yes, you can remove your Reserved Instance listings at any point until a sale is pending (meaning a buyer has bought your Reserved Instance and confirmation of payment is pending).

''Q. Which pricing dimensions can I set for the Reserved Instances I want to list?''

Using the Reserved Instance Marketplace, you can set an upfront price you’d be willing to accept. You cannot set the hourly price (which will remain the same as was set on the original Reserved Instance), and you will not receive any funds collected from payments associated with the hourly prices.

''Q. Can I still use my reservation while it is listed on the Reserved Instance Marketplace?''

Yes, you will continue to receive the capacity and billing benefit of your reservation until it is sold. Once sold, any running instance that was being charged at the discounted rate will be charged at the On-Demand rate until and unless you purchase a new reservation, or terminate the instance.

''Q. Can I resell a Reserved Instance that I purchased from the Reserved Instance Marketplace?''

Yes, you can resell Reserved Instances purchased from the Reserved Instance Marketplace just like any other Reserved Instance.

''Q. Are there any restrictions when selling Reserved Instances?''

Yes, you must have a US bank account to sell Reserved Instances in the Reserved Instance Marketplace. Support for non-US bank accounts will be coming soon. Also, you may not sell Reserved Instances in the US GovCloud region.

''Q. Can I sell Reserved Instances purchased from the public volume pricing tiers?''

No, this capability is not yet available.

''Q. Is there a charge for selling Reserved Instances on the Reserved Instance Marketplace?''

Yes, AWS charges a service fee of 12% of the total upfront price of each Reserved Instance you sell in the Reserved Instance Marketplace.
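Seller proceeds are therefore the upfront price net of the 12% fee. A minimal sketch (the $2,000 price is hypothetical; the function is illustrative, not an AWS API):

```python
def seller_proceeds(upfront_price: float, quantity: int = 1) -> float:
    """Amount disbursed to the seller after AWS's 12% Marketplace service fee."""
    fee_percent = 12
    return upfront_price * quantity * (100 - fee_percent) / 100

print(seller_proceeds(2000.0))  # 1760.0
```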

''Q. Can AWS sell subsets of my listed Reserved Instances?''

Yes, AWS may sell a subset of the quantity of Reserved Instances that you have listed. For example, if you list 100 Reserved Instances, we may only have a buyer interested in purchasing 50 of them. We will sell those 50 instances and continue to list your remaining 50 Reserved Instances until you decide to stop listing them.

''Q. How do buyers pay for Reserved Instances that they've purchased?''

Payment for completed Reserved Instance sales is made via ACH wire transfer to a US bank account.

''Q. When will I receive my money?''

Once AWS has received funds from the customer that has bought your reservation, we will disburse funds via wire transfer to the bank account you specified when you registered for the Reserved Instance Marketplace.

Then, we will send you an email notification letting you know that we've wired you the funds. Typically, funds will appear in your account within 3-5 days of when your Reserved Instance was sold.

''Q. If I sell my Reserved Instance in the Reserved Instance Marketplace, will I get refunded for the Premium Support I was charged too?''

No, you will not receive a pro-rated refund for the upfront portion of the AWS Premium Support Fee.

''Q. Will I be notified about Reserved Instance Marketplace activities?''

Yes, you will receive a single email once a day that details your Reserved Instance Marketplace activity whenever you create or cancel Reserved Instance listings, buyers purchase your listings, or AWS disburses funds to your bank account.

''Q. What information is exchanged between the buyer and seller to help with the transaction tax calculation?''

The buyer’s city, state, zip+4, and country information will be provided to the seller via a disbursement report. This information will enable sellers to calculate any necessary transaction taxes they need to remit to the government (e.g., sales tax, value-added tax, etc.). The legal entity name of the seller will also be provided on the purchase invoice.

''Q. Are there any restrictions on the customers when purchasing third-party Reserved Instances?''

Yes, you cannot purchase your own listed Reserved Instances, including those in any of your linked accounts (via Consolidated Billing).

''Q. Do I have to pay for Premium Support when purchasing Reserved Instances from the Reserved Instance Marketplace?''

Yes, if you are a Premium Support customer, you will be charged for Premium Support when you purchase a Reserved Instance through the Reserved Instance Marketplace.

!Spot instances
''Q. What is a Spot instance?''

Spot instances are spare EC2 capacity available at savings of up to 90% off On-Demand prices, which AWS can interrupt with a 2-minute notification. Spot uses the same underlying EC2 instances as On-Demand and Reserved Instances, and is best suited for fault-tolerant, flexible workloads. Spot instances provide an additional option for obtaining compute capacity and can be used along with On-Demand and Reserved Instances.

''Q. How is a Spot instance different than an On-Demand instance or Reserved Instance?''

While running, Spot instances are exactly the same as On-Demand or Reserved instances. The main differences are that Spot instances typically offer a significant discount off the On-Demand prices, your instances can be interrupted by Amazon EC2 for capacity requirements with a 2-minute notification, and Spot prices adjust gradually based on long term supply and demand for spare EC2 capacity.

See here for more details on Spot instances.

''Q. How do I purchase and start up a Spot instance?''

Spot instances can be launched using the same tools you use to launch instances today, including the AWS Management Console, Auto Scaling groups, RunInstances, and Spot Fleet. In addition, many AWS services support launching Spot instances, such as EMR, ECS, AWS Data Pipeline, AWS CloudFormation, and AWS Batch.

To start up a Spot instance, you simply need to choose a Launch Template and the number of instances you would like to request.

See here for more details on how to request Spot instances.

''Q. How many Spot instances can I request?''

You can request Spot instances up to your Spot limit for each region. Note that customers new to AWS might start with a lower limit. To learn more about Spot instance limits, please refer to the Amazon EC2 User Guide.

If you would like a higher limit, complete the Amazon EC2 instance request form with your use case and your limit increase will be considered. Limit increases are tied to the region they were requested for.

''Q. What price will I pay for a Spot instance?''

You pay the Spot price that’s in effect at the beginning of each instance-hour for your running instance. If the Spot price changes after you launch the instance, the new price is charged against the instance usage for the subsequent hour.
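To sketch that rule: the price billed for each instance-hour is whichever Spot price was in effect when the hour began. The function and prices below are hypothetical:

```python
import bisect

def billed_prices(price_changes, hours_run):
    """Return the price charged for each instance-hour: the Spot price in
    effect at the beginning of that hour (illustrative sketch of the rule).

    price_changes: sorted (hour_offset, price) pairs, with an entry at
    hour 0 (launch). hours_run: number of whole instance-hours to bill.
    """
    times = [t for t, _ in price_changes]
    charged = []
    for hour in range(hours_run):
        latest = bisect.bisect_right(times, hour) - 1
        charged.append(price_changes[latest][1])
    return charged

# Launched at $0.10/hr; price moves to $0.12 effective at hour 2:
# billed_prices([(0, 0.10), (2, 0.12)], 3) -> [0.10, 0.10, 0.12]
```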

''Q. What is a Spot capacity pool?''

A Spot capacity pool is a set of unused EC2 instances with the same instance type, operating system, Availability Zone, and network platform (EC2-Classic or EC2-VPC). Each spot capacity pool can have a different price based on supply and demand.

''Q. What are the best practices to use Spot instances?''

We highly recommend using multiple Spot capacity pools to maximize the amount of Spot capacity available to you. EC2 provides built-in automation to find the most cost-effective capacity across multiple Spot capacity pools using Spot Fleet. For more information, please see Spot Best Practices.

''Q. How can I determine the status of my Spot request?''

You can determine the status of your Spot request via the Spot Request Status code and message. You can access Spot Request Status information on the Spot Instances page of the EC2 console in the AWS Management Console, or via the API and CLI. For more information, please visit the Amazon EC2 Developer Guide.

''Q. Are Spot instances available for all instance families and sizes and in all regions?''

Spot instances are available in all public AWS regions. Spot is available for nearly all EC2 instance families and sizes, including the newest compute-optimized instances, accelerated graphics, and FPGA instance types. A full list of instance types supported in each region is listed here.

''Q. Which operating systems are available as Spot instances?''

Linux/UNIX and Windows Server are available. Windows Server with SQL Server is not currently available.

''Q. Can I use a Spot instance with a paid AMI for third-party software (such as IBM’s software packages)?''

Not at this time.

''Q. When would my Spot instance get interrupted?''

Over the last 3 months, 92% of Spot instance interruptions were from a customer manually terminating the instance because the application had completed its work.

If EC2 needs to reclaim your Spot instance, it is for one of two possible reasons. The primary reason is Amazon EC2 capacity requirements (e.g. On-Demand or Reserved Instance usage). Secondarily, if you have chosen to set a “maximum Spot price” and the Spot price rises above this, your instance will be reclaimed with a two-minute notification. This parameter determines the maximum price you are willing to pay for a Spot instance hour and, by default, is set at the On-Demand price. As before, you continue to pay the Spot market price in effect while your instance was running, not your maximum price, charged in per-second increments.

''Q. What happens to my Spot instance when it gets interrupted?''

You can choose to have your Spot instances terminated, stopped or hibernated upon interruption. Stop and hibernate options are available for persistent Spot requests and Spot Fleets with the “maintain” option enabled. By default, your instances are terminated.

Refer to Spot Hibernation to learn more about handling interruptions.

''Q. What is the difference between Stop and Hibernate interruption behaviors?''

In the case of Hibernate, your instance is hibernated and the RAM data is persisted. In the case of Stop, your instance is shut down and RAM is cleared.

In both the cases, data from your EBS root volume and any attached EBS data volumes is persisted. Your private IP address remains the same, as does your elastic IP address (if applicable). The network layer behavior will be similar to that of EC2 Stop-Start workflow. Stop and Hibernate are available for Amazon EBS backed instances only. Local instance storage is not persisted.

''Q. What if my EBS root volume is not large enough to store memory state (RAM) for Hibernate?''

You should have sufficient space available on your EBS root volume to write data from memory. If the EBS root volume does not have enough space, hibernation will fail and the instance will be shut down instead. Ensure that your EBS volume is large enough to persist memory data before choosing the hibernate option.

''Q. What is the benefit if Spot hibernates my instance on interruption?''

With hibernate, Spot instances will pause and resume around any interruptions so your workloads can pick up from exactly where they left off. You can use hibernation when your instance(s) need to retain instance state across shutdown-startup cycles, i.e. when your applications running on Spot depend on contextual, business, or session data stored in RAM.

''Q. What do I need to do to enable hibernation for my Spot instances?''

Refer to Spot Hibernation to learn about enabling hibernation for your Spot instances.

''Q. Do I have to pay for hibernating my Spot instance?''

There is no additional charge for hibernating your instance beyond the EBS storage costs and any other EC2 resources you may be using. You are not charged instance usage fees once your instance is hibernated.

''Q. Can I restart a stopped instance or resume a hibernated instance?''

No, you will not be able to restart a stopped instance or resume a hibernated instance directly. Stop-start and hibernate-resume cycles are controlled by Amazon EC2. If an instance is stopped or hibernated by Spot, it will be restarted or resumed by Amazon EC2 when the capacity becomes available.

''Q. Which instances and operating systems support hibernation?''

Spot Hibernation is currently supported for Amazon Linux AMIs, Ubuntu, and Microsoft Windows operating systems running on C3, C4, C5, M4, M5, R3, and R4 instances with less than 100 GiB of memory (RAM).

To review the list of supported OS versions, refer to Spot Hibernation.

''Q. How will I be charged if my Spot instance is interrupted?''

If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second. If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.
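Those rules can be sketched as a small function; names and structure are illustrative, not AWS's actual billing implementation:

```python
import math

def spot_usage_charge(seconds_run, rate_per_hour, interrupted_by_aws,
                      windows_self_terminated=False):
    """Illustrative sketch of the Spot interruption billing rules above."""
    # Reclaimed by EC2 within the first instance-hour: no charge
    if interrupted_by_aws and seconds_run < 3600:
        return 0.0
    # Self-terminated Windows instances are billed to the full hour
    if windows_self_terminated:
        return math.ceil(seconds_run / 3600) * rate_per_hour
    # Everything else is billed to the nearest second
    return (seconds_run / 3600) * rate_per_hour

# 30 minutes at a hypothetical $0.10/hr rate:
# reclaimed by EC2 -> $0.00; self-terminated Linux -> $0.05
```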

''Q. How am I charged if Spot price changes while my instance is running?''

You pay the Spot price set at the beginning of each instance-hour for that entire hour, with usage billed to the nearest second.

''Q. Where can I see my usage history for Spot instances and see how much I was billed?''

The AWS Management Console makes a detailed billing report available which shows Spot instance start and termination/stop times for all instances. Customers can check the billing report against historical Spot prices via the API to verify that the Spot price they were billed is correct.

''Q: Are Spot blocks (Fixed Duration Spot instances) ever interrupted?''

Spot blocks are designed not to be interrupted and will run continuously for the duration you select, independent of Spot market price. In rare situations, Spot blocks may be interrupted due to AWS capacity needs. In these cases, we will provide a two-minute warning before we terminate your instance (termination notice), and you will not be charged for the affected instance(s).

''Q. What is a Spot fleet?''

A Spot Fleet allows you to automatically request and manage multiple Spot instances that provide the lowest price per unit of capacity for your cluster or application, like a batch processing job, a Hadoop workflow, or an HPC grid computing job. You can include the instance types that your application can use. You define a target capacity based on your application needs (in units including instances, vCPUs, memory, storage, or network throughput) and update the target capacity after the fleet is launched. Spot fleets enable you to launch and maintain the target capacity, and to automatically request resources to replace any that are disrupted or manually terminated. Learn more about Spot fleets.
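As a rough sketch of the capacity-unit idea, a lowest-price fleet could fill its target like this; the instance weights and prices here are hypothetical, not real quotes:

```python
def fill_target_capacity(pools, target_units):
    """Greedy sketch: launch from the pool with the lowest price per unit
    of capacity until the target is met. Assumes the cheapest pool has
    enough capacity; the real service handles far more cases."""
    cheapest = min(pools, key=lambda p: p["price"] / p["units"])
    launched = []
    remaining = target_units
    while remaining > 0:
        launched.append(cheapest["type"])
        remaining -= cheapest["units"]
    return launched

pools = [
    {"type": "m5.large", "units": 1, "price": 0.04},   # hypothetical prices
    {"type": "m5.xlarge", "units": 2, "price": 0.07},  # $0.035 per unit
]
# fill_target_capacity(pools, 4) -> ["m5.xlarge", "m5.xlarge"]
```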

''Q. Is there any additional charge for making Spot Fleet requests''

No, there is no additional charge for Spot Fleet requests.

''Q. What limits apply to a Spot Fleet request?''

Visit the Spot Fleet Limits section of the Amazon EC2 User Guide to learn about the limits that apply to your Spot Fleet request.

''Q. What happens if my Spot Fleet request tries to launch Spot instances but exceeds my regional Spot request limit?''

If your Spot Fleet request exceeds your regional Spot instance request limit, individual Spot instance requests will fail with a Spot request limit exceeded request status. Your Spot Fleet request’s history will show any Spot request limit errors that the Fleet request received. Visit the Monitoring Your Spot Fleet section of the Amazon EC2 User Guide to learn how to describe your Spot Fleet request's history.

''Q. Are Spot fleet requests guaranteed to be fulfilled?''

No. Spot Fleet requests allow you to place multiple Spot instance requests simultaneously, and they are subject to the same availability and prices as a single Spot instance request. For example, if no resources are available for the instance types listed in your Spot Fleet request, we may be unable to fulfill your request, partially or in full. We recommend including all instance types and Availability Zones that are suitable for your workloads in the Spot Fleet request.

''Q. Can I submit a multi-Availability Zone Spot Fleet request?''

Yes, visit the Spot Fleet Examples section of the Amazon EC2 User Guide to learn how to submit a multi-Availability Zone Spot Fleet request.

''Q. Can I submit a multi-region Spot Fleet request?''

No, we do not support multi-region Fleet requests.

''Q. How does Spot Fleet allocate resources across the various Spot instance pools specified in the launch specifications?''

The RequestSpotFleet API provides two allocation strategies: lowestPrice and diversified. The lowestPrice strategy allows you to provision your Spot Fleet resources in instance pools that provide the lowest price per unit of capacity at the time of the request. The diversified strategy allows you to provision your Spot Fleet resources across multiple Spot instance pools. This enables you to maintain your fleet’s target capacity and increase your application’s availability as Spot capacity fluctuates.

Running your application’s resources across diverse Spot instance pools also allows you to further reduce your fleet’s operating costs over time. Visit the Amazon EC2 User Guide to learn more.
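The spirit of the diversified strategy, spreading target capacity across the specified pools, can be sketched as a simple round-robin; the actual service logic is more involved:

```python
def diversified_allocation(pool_names, target_instances):
    """Round-robin sketch of diversified allocation across Spot capacity
    pools (illustrative only)."""
    counts = {pool: 0 for pool in pool_names}
    for i in range(target_instances):
        counts[pool_names[i % len(pool_names)]] += 1
    return counts

# Hypothetical pools (instance type / Availability Zone), 7 instances:
# diversified_allocation(["c5/us-east-1a", "c5/us-east-1b", "m5/us-east-1a"], 7)
# -> {"c5/us-east-1a": 3, "c5/us-east-1b": 2, "m5/us-east-1a": 2}
```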

''Q. Can I tag a Spot Fleet request?''

You can request to launch Spot instances with tags via Spot Fleet. The Fleet by itself cannot be tagged.

''Q. How can I see which Spot fleet owns my Spot instances?''

You can identify the Spot instances associated with your Spot Fleet by describing your fleet request. A fleet request remains available for 48 hours after all of its Spot instances have been terminated. See the Amazon EC2 User Guide to learn how to describe your Spot Fleet request.

''Q. Can I modify my Spot Fleet request?''

Yes, you can modify the target capacity of your Spot Fleet request. You may need to cancel the request and submit a new one to change other request configuration parameters.

''Q. Can I specify a different AMI for each instance type that I want to use?''

Yes, simply specify the AMI you’d like to use in each launch specification you provide in your Spot Fleet request.

''Q. Can I use Spot Fleet with Elastic Load Balancing, Auto Scaling, or Elastic MapReduce?''

You can use Auto Scaling features with Spot Fleet, such as target tracking, health checks, and CloudWatch metrics, and you can attach instances to your Elastic Load Balancing load balancers (both Classic Load Balancers and Application Load Balancers). Elastic MapReduce has a feature named “Instance fleets” that provides capabilities similar to Spot Fleet.

''Q. Does a Spot Fleet request terminate Spot instances when they are no longer running in the lowest priced Spot pools and relaunch them in the lowest priced pools?''

No, Spot Fleet requests do not automatically terminate and re-launch instances while they are running. However, if you terminate a Spot instance, Spot Fleet will replenish it with a new Spot instance in the new lowest priced pool.

''Q: Can I use stop or Hibernation interruption behaviors with Spot Fleet?''

Yes, stop-start and hibernate-resume are supported with Spot Fleet with “maintain” fleet option enabled. 

!Platform
Amazon Time Sync Service | Availability zones | Cluster instances | Hardware information | Micro instances | Nitro Hypervisor | Optimize CPUs

!!Amazon Time Sync Service
''Q. How do I use this service?''

The service provides an NTP endpoint at a link-local IP address (169.254.169.123) accessible from any instance running in a VPC. Instructions for configuring NTP clients are available for Linux and Windows.
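For example, on a Linux instance running chrony, pointing the client at the link-local endpoint is a one-line change. This is a minimal sketch assuming chrony; follow the linked instructions for your specific OS and NTP client:

```
# /etc/chrony.conf -- prefer the Amazon Time Sync Service endpoint
server 169.254.169.123 prefer iburst
```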

''Q. What are the key benefits of using this service?''

A consistent and accurate reference time source is crucial for many applications and services. The Amazon Time Sync Service provides a time reference that can be securely accessed from an instance without requiring VPC configuration changes and updates. It is built on Amazon’s proven network infrastructure and uses redundant reference time sources to ensure high accuracy and availability.

''Q. Which instance types are supported for this service?''

All instances running in a VPC can access the service.

!!Availability zones
''Q: How isolated are Availability Zones from one another?''

Each Availability Zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failure like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornadoes or flooding would only affect a single Availability Zone.

''Q: Is Amazon EC2 running in more than one region?''

Yes. Please refer to Regional Products and Services for more details of our product and service availability by region.

''Q: How can I make sure that I am in the same Availability Zone as another developer?''

We do not currently support the ability to coordinate launches into the same Availability Zone across AWS developer accounts. One Availability Zone name (for example, us-east-1a) in two AWS customer accounts may relate to different physical Availability Zones.

''Q: If I transfer data between Availability Zones using public IP addresses, will I be charged twice for Regional Data Transfer (once because it’s across zones, and a second time because I’m using public IP addresses)?''

No. Regional Data Transfer rates apply if at least one of the following is true, but you are only charged once for a given instance even if both are true:

* The other instance is in a different Availability Zone, regardless of which type of address is used.
* Public or Elastic IP addresses are used, regardless of which Availability Zone the other instance is in.

!!Cluster instances
''Q. What is a Cluster Compute Instance?''

Cluster Compute Instances combine high compute resources with high performance networking for High Performance Computing (HPC) applications and other demanding network-bound applications. Cluster Compute Instances provide similar functionality to other Amazon EC2 instances but have been specifically engineered to provide high performance networking.

Amazon EC2 cluster placement group functionality allows users to group Cluster Compute Instances in clusters – allowing applications to get the low-latency network performance necessary for tightly-coupled node-to-node communication typical of many HPC applications. Cluster Compute Instances also provide significantly increased network throughput both within the Amazon EC2 environment and to the Internet. As a result, these instances are also well suited for customer applications that need to perform network-intensive operations.

Learn more about use of this instance type for HPC applications.

''Q. What kind of network performance can I expect when I launch instances in a cluster placement group?''

The bandwidth an EC2 instance can utilize in a cluster placement group depends on the instance type and its networking performance specification. Inter-instance traffic within the same region can utilize 5 Gbps for single-flow and up to 25 Gbps for multi-flow traffic. When launched in a placement group, select EC2 instances can utilize up to 10 Gbps for single-flow traffic.

''Q. What is a Cluster GPU Instance?''

Cluster GPU Instances provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefiting from highly parallelized processing that can be accelerated by GPUs using the CUDA and OpenCL programming models. Common applications include modeling and simulation, rendering and media processing.

Cluster GPU Instances give customers with HPC workloads an option beyond Cluster Compute Instances to further customize their high performance clusters in the cloud for applications that can benefit from the parallel computing power of GPUs.

Cluster GPU Instances use the same cluster placement group functionality as Cluster Compute Instances for grouping instances into clusters – allowing applications to get the low-latency, high bandwidth network performance required for tightly-coupled node-to-node communication typical of many HPC applications.

Learn more about HPC on AWS.

''Q. What is a High Memory Cluster Instance?''

High Memory Cluster Instances provide customers with large amounts of memory and CPU capabilities per instance in addition to high network capabilities. These instance types are ideal for memory-intensive workloads including in-memory analytics systems, graph analysis and many science and engineering applications.

High Memory Cluster Instances use the same cluster placement group functionality as Cluster Compute Instances for grouping instances into clusters – allowing applications to get the low-latency, high bandwidth network performance required for tightly-coupled node-to-node communication typical of many HPC and other network intensive applications.

''Q. Does use of Cluster Compute and Cluster GPU Instances differ from other Amazon EC2 instance types?''

The use of Cluster Compute and Cluster GPU Instances differs from that of other Amazon EC2 instance types in two ways.

First, Cluster Compute and Cluster GPU Instances use Hardware Virtual Machine (HVM) based virtualization and run only Amazon Machine Images (AMIs) based on HVM virtualization. Paravirtual Machine (PVM) based AMIs used with other Amazon EC2 instance types cannot be used with Cluster Compute or Cluster GPU Instances.

Second, in order to fully benefit from the available low latency, full bisection bandwidth between instances, Cluster Compute and Cluster GPU Instances must be launched into a cluster placement group through the Amazon EC2 API or AWS Management Console.

''Q. What is a cluster placement group?''

A cluster placement group is a logical entity that enables creating a cluster of instances by launching instances as part of a group. The cluster of instances then provides low latency connectivity between instances in the group. Cluster placement groups are created through the Amazon EC2 API or AWS Management Console.

''Q. Are all features of Amazon EC2 available for Cluster Compute and Cluster GPU Instances?''

Currently, Amazon DevPay is not available for Cluster Compute or Cluster GPU Instances.

''Q. Is there a limit on the number of Cluster Compute or Cluster GPU Instances I can use and/or the size of cluster I can create by launching Cluster Compute Instances or Cluster GPU into a cluster placement group?''

There is no limit specific to Cluster Compute Instances. For Cluster GPU Instances, you can launch 2 instances on your own. If you need more capacity, please complete the Amazon EC2 instance request form (selecting the appropriate primary instance type).

''Q. Are there any ways to optimize the likelihood that I receive the full number of instances I request for my cluster via a cluster placement group?''

We recommend that you launch the minimum number of instances required to participate in a cluster in a single launch. For very large clusters, you should launch multiple placement groups, e.g. two placement groups of 128 instances, and combine them to create a larger, 256 instance cluster.

''Q. Can Cluster GPU and Cluster Compute Instances be launched into a single cluster placement group?''

While it may be possible to launch different cluster instance types into a single placement group, at this time we only support homogeneous placement groups.

''Q. If an instance in a cluster placement group is stopped then started again, will it maintain its presence in the cluster placement group?''

Yes. A stopped instance will be started as part of the cluster placement group it was in when it stopped. If capacity is not available for it to start within its cluster placement group, the start will fail.

!!Hardware information
''Q: What kind of hardware will my application stack run on?''

Visit Amazon EC2 Instance Types for a list of EC2 instances available by region.

''Q: How does EC2 perform maintenance?''

AWS regularly performs routine hardware, power and network maintenance without disrupting customer instances. To achieve this we employ a combination of tools and methods across the entire AWS Global infrastructure, such as redundant and concurrently maintainable systems, as well as live system updates and migration. For example, in these cases - Example 1, Example 2 - EC2 used live system updates to perform the required security maintenance non-disruptively for over 90% of EC2 Instances, with each maintenance completing in less than two seconds. AWS continuously invests in technology and processes to complete routine maintenance ever more safely and quickly, often with no disruption to customer instances.

''Q: How do I select the right instance type?''

Amazon EC2 instances are grouped into five families: General Purpose, Compute Optimized, Memory Optimized, Storage Optimized and Accelerated Computing instances.

* General Purpose instances have memory-to-CPU ratios suitable for most general purpose applications and come with fixed performance (M5, M4) or burstable performance (T2).
* Compute Optimized instances (C5, C4) have proportionally more CPU resources than memory (RAM) and are well suited for scale-out compute-intensive applications and High Performance Computing (HPC) workloads.
* Memory Optimized instances (X1e, X1, R4) offer larger memory sizes for memory-intensive applications, including database and memory caching applications.
* Accelerated Computing instances (P3, P2, G3, F1) take advantage of the parallel processing capabilities of NVIDIA Tesla GPUs for high performance computing and machine/deep learning. GPU Graphics instances (G3) offer high-performance 3D graphics capabilities for applications using OpenGL and DirectX, and F1 instances deliver Xilinx FPGA-based reconfigurable computing.
* Storage Optimized instances (H1, I3, I3en, D2) provide very high, low-latency I/O capacity using SSD-based local instance storage for I/O-intensive applications. D2 and H1, the dense-storage and HDD-storage instances, provide high local storage density and sequential I/O performance for data warehousing, Hadoop and other data-intensive applications.

When choosing instance types, you should consider the characteristics of your application with regard to resource utilization (i.e. CPU, memory, storage) and select the optimal instance family and instance size.

''Q: What is an “EC2 Compute Unit” and why did you introduce it?''

Transitioning to a utility computing model fundamentally changes how developers have been trained to think about CPU resources. Instead of purchasing or leasing a particular processor to use for several months or years, you are renting capacity by the hour. Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware.

Amazon EC2 uses a variety of measures to provide each instance with a consistent and predictable amount of CPU capacity. In order to make it easy for developers to compare CPU capacity between different instance types, we have defined an Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed in terms of these EC2 Compute Units. We use several benchmarks and tests to manage the consistency and predictability of the performance from an EC2 Compute Unit. The EC2 Compute Unit (ECU) provides the relative measure of the integer processing power of an Amazon EC2 instance. Over time, we may add or substitute measures that go into the definition of an EC2 Compute Unit, if we find metrics that will give you a clearer picture of compute capacity.

''Q: How does EC2 ensure consistent performance of instance types over time?''

AWS conducts yearly performance benchmarking of Linux and Windows compute performance on EC2 instance types. Benchmarking results, a test suite that customers can use to conduct independent testing, and guidance on expected performance variance are available under NDA for M, C, R, T and z1d instances; please contact your sales representative to request them.

''Q: What is the regional availability of Amazon EC2 instance types?''

For a list of all instances and regional availability, visit Amazon EC2 Pricing.

!!Micro instances
''Q. How much compute power do Micro instances provide?''

Micro instances provide a small amount of consistent CPU resources and allow you to burst CPU capacity up to 2 ECUs when additional cycles are available. They are well suited for lower-throughput applications and web sites that periodically consume significant compute cycles but otherwise need very little CPU, such as background processes and daemons. Learn more about use of this instance type.

''Q. How does a Micro instance compare in compute power to a Standard Small instance?''

At steady state, Micro instances receive a fraction of the compute resources that Small instances do. Therefore, if your application has compute-intensive or steady state needs we recommend using a Small instance (or larger, depending on your needs). However, Micro instances can periodically burst up to 2 ECUs (for short periods of time). This is double the number of ECUs available from a Standard Small instance. Therefore, if you have a relatively low throughput application or web site with an occasional need to consume significant compute cycles, we recommend using Micro instances.

''Q. How can I tell if an application needs more CPU resources than a Micro instance is providing?''

The CloudWatch metric for CPU utilization will report 100% utilization if the instance bursts so much that it exceeds its available CPU resources during that CloudWatch monitored minute. CloudWatch reporting 100% CPU utilization is your signal that you should consider scaling up – manually or via Auto Scaling – to a larger instance type, or scaling out to multiple Micro instances.
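That signal can be encoded as a simple check over per-minute CloudWatch datapoints; the thresholds and names below are illustrative, not an AWS recommendation:

```python
def should_scale(cpu_datapoints, threshold=100.0, sustained_minutes=5):
    """Return True when CPU utilization has been pegged at the threshold
    for several consecutive minutes, suggesting a scale-up or scale-out."""
    consecutive = 0
    for utilization in cpu_datapoints:
        consecutive = consecutive + 1 if utilization >= threshold else 0
        if consecutive >= sustained_minutes:
            return True
    return False

# Five pegged minutes in a row trigger the signal:
# should_scale([70, 100, 100, 100, 100, 100]) -> True
```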

''Q. Are all features of Amazon EC2 available for Micro instances?''

Currently Amazon DevPay is not available for Micro instances.

!!Nitro Hypervisor
''Q. What is the Nitro Hypervisor?''

The launch of C5 instances introduced a new hypervisor for Amazon EC2, the Nitro Hypervisor. As a component of the Nitro system, the Nitro Hypervisor primarily provides CPU and memory isolation for EC2 instances. VPC networking and EBS storage resources are implemented by dedicated hardware components, Nitro Cards that are part of all current generation EC2 instance families. The Nitro Hypervisor is built on core Linux Kernel-based Virtual Machine (KVM) technology, but does not include general-purpose operating system components.

''Q. How does the Nitro Hypervisor benefit customers?''

The Nitro Hypervisor provides consistent performance and increased compute and memory resources for EC2 virtualized instances by removing host system software components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide practically all of the resources from the server to customers. Previously, C3 and C4 instances each eliminated software components by moving VPC and EBS functionality to hardware designed and built by AWS. This hardware enables the Nitro Hypervisor to be very small and uninvolved in data processing tasks for networking and storage.

''Q. Will all EC2 instances use the Nitro Hypervisor?''

Eventually all new instance types will use the Nitro Hypervisor, but in the near term, some new instance types will use Xen depending on the requirements of the platform.

''Q. Will AWS continue to invest in its Xen-based hypervisor?''

Yes. AWS has been a founding member of the Xen Project since its establishment as a Linux Foundation Collaborative Project and remains an active participant on its Advisory Board. As AWS expands its global cloud infrastructure, EC2's use of its Xen-based hypervisor also continues to grow, so EC2's investment in Xen is growing, not shrinking. Xen will remain a core component of EC2 instances for the foreseeable future.

''Q. How many EBS volumes and Elastic Network Interfaces (ENIs) can be attached to instances running on the Nitro Hypervisor?''

Instances running on the Nitro Hypervisor support a maximum of 27 additional PCI devices for EBS volumes and VPC ENIs. Each EBS volume or VPC ENI uses a PCI device. For example, if you attach 3 additional network interfaces to an instance that uses the Nitro Hypervisor, you can attach up to 24 EBS volumes to that instance.
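
The device budget described above can be sketched as simple arithmetic: each EBS volume and each additional ENI consumes one of the 27 extra PCI devices. The helper below is illustrative only, not part of any AWS API.

```python
# Each EBS volume and each additional VPC ENI consumes one of the
# 27 extra PCI devices on a Nitro-based instance (per the FAQ above).
NITRO_EXTRA_PCI_DEVICES = 27  # illustrative constant, not an AWS API value

def max_ebs_volumes(extra_enis: int) -> int:
    """How many EBS volumes can still be attached after attaching
    `extra_enis` additional network interfaces."""
    remaining = NITRO_EXTRA_PCI_DEVICES - extra_enis
    if remaining < 0:
        raise ValueError("ENI count exceeds the PCI device budget")
    return remaining

print(max_ebs_volumes(3))  # matches the FAQ's example: 24
```

With 3 additional ENIs attached, 24 PCI devices remain for EBS volumes, matching the example in the answer above.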

''Q. Will the Nitro Hypervisor change the APIs used to interact with EC2 instances?''

No, all the public-facing APIs for interacting with EC2 instances that run under the Nitro Hypervisor remain the same. For example, the "hypervisor" field of the DescribeInstances response will continue to report "xen" for all EC2 instances, even those running under the Nitro Hypervisor. This field may be removed in a future revision of the EC2 API.

''Q. Which AMIs are supported on instances that use the Nitro Hypervisor?''

EBS-backed HVM AMIs with support for ENA networking and booting from NVMe storage can be used with instances that run under the Nitro Hypervisor. The latest Amazon Linux and Windows AMIs provided by Amazon are supported, as are the latest AMIs of Ubuntu, Debian, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, CentOS, and FreeBSD.

''Q. Will I notice any difference between instances using Xen hypervisor and those using the Nitro Hypervisor?''

Yes. For example, instances running under the Nitro Hypervisor boot from EBS volumes using an NVMe interface. Instances running under Xen boot from an emulated IDE hard drive, and switch to the Xen paravirtualized block device drivers.

Operating systems can identify when they are running under a hypervisor. Some software assumes that EC2 instances run under the Xen hypervisor and relies on this detection. When an instance uses the Nitro Hypervisor, the operating system detects that it is running under KVM instead, so any process for identifying EC2 instances should be one that works under both hypervisors.

All the features of EC2 such as Instance Metadata Service work the same way on instances running under both Xen and the Nitro Hypervisor. The majority of applications will function the same way under both Xen and the Nitro Hypervisor as long as the operating system has the needed support for ENA networking and NVMe storage.

''Q. How are instance reboot and termination EC2 API requests implemented by the Nitro Hypervisor?''

The Nitro Hypervisor signals the operating system running in the instance that it should shut down cleanly by industry standard ACPI methods. For Linux instances, this requires that acpid be installed and functioning correctly. If acpid is not functioning in the instance, termination events will be delayed by multiple minutes and will then execute as a hard reset or power off.

''Q. How do EBS volumes behave when accessed by NVMe interfaces?''

There are some important differences in how operating system NVMe drivers behave compared to Xen paravirtual (PV) block drivers.

First, the NVMe device names used by Linux-based operating systems will be different from the parameters for EBS volume attachment requests and block device mapping entries such as /dev/xvda and /dev/xvdf. NVMe devices are enumerated by the operating system as /dev/nvme0n1, /dev/nvme1n1, and so on. The NVMe device names are not persistent mappings to volumes, so other methods, such as file system UUIDs or labels, should be used when configuring the automatic mounting of file systems or other startup activities. When EBS volumes are accessed via the NVMe interface, the EBS volume ID is available via the controller serial number, and the device name specified in EC2 API requests is provided by an NVMe vendor extension to the Identify Controller command. This enables backward-compatible symbolic links to be created by a utility script. For more information, see the EC2 documentation on device naming and NVMe-based EBS volumes.
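
Because the /dev/nvmeXn1 names are not stable across reboots, mounts should key off the file system UUID rather than the device name. A minimal sketch of building such an /etc/fstab entry; the helper name and the UUID shown are illustrative:

```python
def fstab_entry(uuid: str, mount_point: str, fs_type: str = "ext4",
                options: str = "defaults,nofail") -> str:
    """Build an /etc/fstab line keyed on the file system UUID rather
    than a non-persistent NVMe device name like /dev/nvme1n1."""
    return f"UUID={uuid} {mount_point} {fs_type} {options} 0 2"

# The UUID here is a placeholder; on a real instance, read it with
# `blkid` or from /dev/disk/by-uuid/.
print(fstab_entry("aebf131c-6957-451e-8d34-ec978d9581ae", "/data"))
```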

Second, by default the NVMe drivers included in most operating systems implement an I/O timeout. If an I/O does not complete in an implementation-specific amount of time, usually tens of seconds, the driver will attempt to cancel the I/O, retry it, or return an error to the component that issued it. The Xen PV block device interface does not time out I/O, which can result in processes that cannot be terminated while they wait for I/O. The Linux NVMe driver behavior can be modified by specifying a higher value for the nvme_core.io_timeout kernel module parameter.

Third, the NVMe interface can transfer much larger amounts of data per I/O, and in some cases may be able to support more outstanding I/O requests, compared to the Xen PV block interface. This can cause higher I/O latency if very large I/Os or a large number of I/O requests are issued to volumes designed to support throughput workloads like EBS Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes. This I/O latency is normal for throughput optimized volumes in these scenarios, but may cause I/O timeouts in NVMe drivers. The I/O timeout can be adjusted in the Linux driver by specifying a larger value for the nvme_core.io_timeout kernel module parameter.
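
One common way to raise the Linux timeout persistently is a modprobe options file. The sketch below renders that line; the upper bound used here is the 32-bit maximum commonly cited for this parameter, which you should verify against your kernel version.

```python
def nvme_timeout_option(seconds: int) -> str:
    """Render a modprobe.d options line that raises the NVMe I/O
    timeout. Write the result to e.g. /etc/modprobe.d/nvme.conf and
    rebuild the initramfs so the setting applies at boot."""
    if not 0 < seconds <= 4294967295:  # assumed driver maximum (uint32)
        raise ValueError("timeout out of range")
    return f"options nvme_core io_timeout={seconds}"

print(nvme_timeout_option(255))  # → options nvme_core io_timeout=255
```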

!Optimize CPUs
''Q: What is Optimize CPUs?''

Optimize CPUs gives you greater control of your EC2 instances on two fronts. First, you can specify a custom number of vCPUs when launching new instances to save on vCPU-based licensing costs. Second, you can disable Intel Hyper-Threading Technology (Intel HT Technology) for workloads that perform well with single-threaded CPUs, such as certain high-performance computing (HPC) applications.
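
The vCPU arithmetic behind this is simple: vCPUs = cores × threads per core, which is what the CpuOptions (CoreCount, ThreadsPerCore) parameters of RunInstances control. A minimal sketch; the helper itself is illustrative, not an EC2 API:

```python
def vcpu_count(core_count: int, threads_per_core: int) -> int:
    """vCPUs seen by the guest. CpuOptions in RunInstances takes
    CoreCount and ThreadsPerCore; disabling hyper-threading means
    threads_per_core = 1."""
    if threads_per_core not in (1, 2):
        raise ValueError("EC2 supports 1 or 2 threads per core")
    return core_count * threads_per_core

print(vcpu_count(8, 2))  # 16 vCPUs with hyper-threading enabled
print(vcpu_count(8, 1))  # 8 vCPUs with hyper-threading disabled
```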

''Q: Why should I use Optimize CPUs feature?''

You should use Optimize CPUs if:

* You are running EC2 workloads that are not compute bound and are incurring vCPU-based licensing costs. By launching instances with a custom number of vCPUs, you may be able to optimize your licensing spend.
* You are running workloads that will benefit from disabling hyper-threading on EC2 instances.
''Q: How will the CPU optimized instances be priced?''

CPU optimized instances are priced the same as the equivalent full-sized instance.

''Q: How will my application performance change when using Optimize CPUs on EC2?''

The performance impact of Optimize CPUs depends largely on the workloads you are running on EC2. We encourage you to benchmark your application performance with Optimize CPUs to arrive at the right number of vCPUs and the optimal hyper-threading behavior for your application.

''Q: Can I use Optimize CPUs on EC2 Bare Metal instance types (such as i3.metal)?''

No. You can use Optimize CPUs with only virtualized EC2 instances.

''Q. How can I get started with using Optimize CPUs for EC2 Instances?''

For more information on how to get started with Optimize CPUs and supported instance types, please visit the Optimize CPUs documentation page here.

!Workloads
!Amazon EC2 running IBM
''Q. How am I billed for my use of Amazon EC2 running IBM?''

You pay only for what you use and there is no minimum fee. Pricing is per instance-hour consumed for each instance type. Partial instance-hours consumed are billed as full hours. Data transfer for Amazon EC2 running IBM is billed and tiered separately from Amazon EC2. There is no Data Transfer charge between two Amazon Web Services within the same region (i.e. between Amazon EC2 US West and another AWS service in the US West). Data transferred between AWS services in different regions will be charged as Internet Data Transfer on both sides of the transfer.
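
Because partial instance-hours are billed as full hours, the hour count rounds up. A sketch of that rounding; the rate shown is a made-up example, not an actual price:

```python
import math

def instance_cost(seconds_running: float, hourly_rate: float) -> float:
    """Bill whole instance-hours: partial hours round up to a full hour."""
    hours_billed = math.ceil(seconds_running / 3600)
    return hours_billed * hourly_rate

# 90 minutes of runtime bills as 2 full hours at a hypothetical $0.50/hr.
print(instance_cost(5400, 0.50))  # → 1.0
```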

For Amazon EC2 running IBM pricing information, please visit the pricing section on the Amazon EC2 running IBM detail page.

''Q. Can I use Amazon DevPay with Amazon EC2 running IBM?''

No, you cannot use DevPay to bundle products on top of Amazon EC2 running IBM at this time.

!Amazon EC2 running Microsoft Windows and other third-party software
''Q. Can I use my existing Windows Server license with EC2?''

Yes, you can. After you’ve imported your own Windows Server machine images using the ImportImage tool, you can launch instances from these machine images on EC2 Dedicated Hosts and effectively manage instances and report usage. Microsoft typically requires that you track usage of your licenses against physical resources such as sockets and cores, and Dedicated Hosts helps you to do this. Visit the Dedicated Hosts detail page for more information on how to use your own Windows Server licenses on Amazon EC2 Dedicated Hosts.

''Q. What software licenses can I bring to the Windows environment?''

Specific software license terms vary from vendor to vendor. Therefore, we recommend that you check the licensing terms of your software vendor to determine if your existing licenses are authorized for use in Amazon EC2.
!Amazon EC2 Instance Types
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.

!General Purpose
[[A1|A1]] [[T3|T3]] [[T3a|T3a]] [[T2|T2]] [[M5|M5]] [[M5a|M5a]] [[M4|M4]]

!Compute Optimized
[[C5|C5]] [[C5n|C5n]] [[C4|c4]]

!Memory Optimized
[[R5|R5]] [[R5a|R5a]] [[R4|R4]] [[X1e|X1e]] [[X1|X1]] [[High Memory |HM]] [[z1d|z1d]]

!Accelerated Computing
[[P3|p3]] [[P2|P2]] [[G3|G3]] [[F1|f1]]

!Storage Optimized
[[I3|I3]] [[I3en|I3en]] [[D2|d2]] [[H1|H1]]

![[Instance Features|IF]]
G3 instances are optimized for graphics-intensive applications.

Features:

* High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
* NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
* Enables NVIDIA GRID Virtual Workstation features, including support for 4 monitors with resolutions up to 4096x2160. Each GPU included in your instance is licensed for one "Concurrent Connected User"
* Enables NVIDIA GRID Virtual Application capabilities for application virtualization software like Citrix XenApp Essentials and VMware Horizon, supporting up to 25 concurrent users per GPU
* Each GPU features an on-board hardware video encoder designed to support up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, enabling low-latency frame capture and encoding, and high-quality interactive streaming experiences
* Enhanced Networking using the Elastic Network Adapter (ENA) with 25 Gbps of aggregate network bandwidth within a Placement Group
|!Model|!GPUs|!vCPU|!Mem (GiB)|!GPU Memory (GiB)|!Network Performance|
|g3s.xlarge|1|4|30.5|8|Up to 10 Gigabit|
|g3.4xlarge|1|16|122|8|Up to 10 Gigabit|
|g3.8xlarge|2|32|244|16|10 Gigabit|
|g3.16xlarge|4|64|488|32|25 Gigabit|
All instances have the following specs:

* 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
* Intel AVX, Intel AVX2, Intel Turbo
* EBS Optimized
* Enhanced Networking†

Use Cases

3D visualizations, graphics-intensive remote workstations, 3D rendering, application streaming, video encoding, and other server-side graphics workloads.
H1 instances feature up to 16 TB of HDD-based local storage, deliver high disk throughput, and offer a balance of compute and memory.

Features:

* Powered by 2.3 GHz Intel® Xeon® E5 2686 v4 processors (codenamed Broadwell)
* Up to 16 TB of HDD storage
* High disk throughput
* ENA enabled Enhanced Networking up to 25 Gbps
|!Model|!vCPU*|!Mem (GiB)|!Networking Performance|!Storage (GB)|
|h1.2xlarge|8|32|Up to 10 Gigabit|1 x 2,000 HDD|
|h1.4xlarge|16|64|Up to 10 Gigabit|2 x 2,000 HDD|
|h1.8xlarge|32|128|10 Gigabit|4 x 2,000 HDD|
|h1.16xlarge|64|256|25 Gigabit|8 x 2,000 HDD|
All instances have the following specs:

* 2.3 GHz Intel Xeon E5 2686 v4 Processor
* Intel AVX†, Intel AVX2†, Intel Turbo
* EBS Optimized
* Enhanced Networking†

Use Cases

MapReduce-based workloads, distributed file systems such as HDFS and MapR-FS, network file systems, log or data processing applications such as Apache Kafka, and big data workload clusters.
High memory instances are purpose-built to run large in-memory databases, including production deployments of SAP HANA, in the cloud.

Features:

* Latest generation Intel® Xeon® Platinum 8176M (Skylake) processors
* 6, 9, and 12 TiB of instance memory, the largest of any EC2 instance
* Bare metal performance with direct access to host hardware
* EBS-optimized by default at no additional cost
* Available in Amazon Virtual Private Clouds (VPCs)
|!Model|!Logical Proc*|!Mem (TiB)|!Network Perf. (Gbps)|!Dedicated EBS Bandwidth (Gbps)|!Network Performance|
|u-6tb1.metal|448|6|25|14|25 Gigabit|
|u-9tb1.metal|448|9|25|14|25 Gigabit|
|u-12tb1.metal|448|12|25|14|25 Gigabit|
* Each logical processor is a hyperthread on 224 cores

All instances have the following specs:

* 2.1 GHz Intel® Xeon® Platinum 8176M (Skylake) processors
* Intel AVX, Intel AVX2, Intel Turbo
* EBS Optimized
* Enhanced Networking†

Use Cases

Ideal for running large enterprise databases, including production installations of the SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.
This instance family provides Non-Volatile Memory Express (NVMe) SSD-backed instance storage optimized for low latency, very high random I/O performance, and high sequential read throughput, and it delivers high IOPS at a low cost. I3 also offers Bare Metal instances (i3.metal), powered by the Nitro System, for non-virtualized workloads, workloads that benefit from access to physical resources, or workloads that may have license restrictions.

Features:

* High Frequency Intel Xeon E5-2686 v4 (Broadwell) Processors with base frequency of 2.3 GHz
* Up to 25 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
* High Random I/O performance and High Sequential Read throughput
* Bare metal instance size for workloads that benefit from direct access to physical processor and memory
|!Model|!vCPU*|!Mem (GiB)|!Local Storage (GB)|!Networking Performance (Gbps)|
|i3.large|2|15.25|1 x 475 NVMe SSD|Up to 10|
|i3.xlarge|4|30.5|1 x 950 NVMe SSD|Up to 10|
|i3.2xlarge|8|61|1 x 1,900 NVMe SSD|Up to 10|
|i3.4xlarge|16|122|2 x 1,900 NVMe SSD|Up to 10|
|i3.8xlarge|32|244|4 x 1,900 NVMe SSD|10|
|i3.16xlarge|64|488|8 x 1,900 NVMe SSD|25|
|i3.metal|72**|512|8 x 1,900 NVMe SSD|25|

All instances have the following specs:

* 2.3 GHz Intel Xeon E5 2686 v4 Processor
* Intel AVX†, Intel AVX2†, Intel Turbo
* EBS Optimized
* Enhanced Networking†

Use Cases

NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g. Aerospike), scale-out transactional databases, data warehousing, Elasticsearch, analytics workloads.
This instance family provides dense Non-Volatile Memory Express (NVMe) SSD instance storage optimized for low latency, high random I/O performance, high sequential disk throughput, and offers the lowest price per GB of SSD instance storage on Amazon EC2.

Features:

* Up to 60 TB of NVMe SSD instance storage
* Up to 100 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
* High random I/O performance and high sequential disk throughput
* Up to 3.1 GHz Intel® Xeon® Scalable (Skylake) processors with the new Intel Advanced Vector Extensions (AVX-512) instruction set
* Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
|!Model|!vCPU|!Mem (GiB)|!Local Storage (GB)|!Network Bandwidth|
|i3en.large|2|16|1 x 1,250 NVMe SSD|Up to 25 Gbps|
|i3en.xlarge|4|32|1 x 2,500 NVMe SSD|Up to 25 Gbps|
|i3en.2xlarge|8|64|2 x 2,500 NVMe SSD|Up to 25 Gbps|
|i3en.3xlarge|12|96|1 x 7,500 NVMe SSD|Up to 25 Gbps|
|i3en.6xlarge|24|192|2 x 7,500 NVMe SSD|25 Gbps|
|i3en.12xlarge|48|384|4 x 7,500 NVMe SSD|50 Gbps|
|i3en.24xlarge|96|768|8 x 7,500 NVMe SSD|100 Gbps|
All instances have the following specs:

* 3.1 GHz all core turbo Intel® Xeon® Scalable (Skylake) processors
* Intel AVX†, Intel AVX2†, Intel AVX-512†, Intel Turbo
* EBS Optimized
* Enhanced Networking

Use cases

NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g. SAP HANA, Aerospike), scale-out transactional databases, distributed file systems, data warehousing, Elasticsearch, analytics workloads.
!AWS IAM FAQs
''General''
''Q: What is AWS Identity and Access Management (IAM)? ''
You can use AWS IAM to securely control individual and group access to your AWS resources. You can create and manage user identities ("IAM users") and grant permissions for those IAM users to access your resources. You can also grant permissions for users outside of AWS (federated users).

''Q: How do I get started with IAM? ''
To start using IAM, you must subscribe to at least one of the AWS services that is integrated with IAM. You then can create and manage users, groups, and permissions via IAM APIs, the AWS CLI, or the IAM console, which gives you a point-and-click, web-based interface. You can also use the visual editor to create policies.

''Q: What problems does IAM solve? ''
IAM makes it easy to provide multiple users secure access to your AWS resources. IAM enables you to:

* Manage IAM users and their access: You can create users in AWS's identity management system, assign users individual security credentials (such as access keys, passwords, multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. You can specify permissions to control which operations a user can perform.
* Manage access for federated users: You can request security credentials with configurable expirations for users who you manage in your corporate directory, allowing you to provide your employees and applications secure access to resources in your AWS account without creating an IAM user account for them. You specify the permissions for these security credentials to control which operations a user can perform.
''Q: Who can use IAM?''
Any AWS customer can use IAM. The service is offered at no additional charge. You will be charged only for the use of other AWS services by your users.

''Q: What is a user? ''
A user is a unique identity recognized by AWS services and applications. Similar to a login user in an operating system like Windows or UNIX, a user has a unique name and can identify itself using familiar security credentials such as a password or access key. A user can be an individual, system, or application requiring access to AWS services. IAM supports users (referred to as "IAM users") managed in AWS's identity management system, and it also enables you to grant access to AWS resources for users managed outside of AWS in your corporate directory (referred to as "federated users").
 
''Q: What can a user do? ''
A user can place requests to web services such as Amazon S3 and Amazon EC2. A user's ability to access web service APIs is under the control and responsibility of the AWS account under which it is defined. You can permit a user to access any or all of the AWS services that have been integrated with IAM and to which the AWS account has subscribed. If permitted, a user has access to all of the resources under the AWS account. In addition, if the AWS account has access to resources from a different AWS account, its users may be able to access data under those AWS accounts. Any AWS resources created by a user are under control of and paid for by its AWS account. A user cannot independently subscribe to AWS services or control resources.
 
''Q: How do users call AWS services? ''
Users can make requests to AWS services using security credentials. Explicit permissions govern a user's ability to call AWS services. By default, users have no ability to call service APIs on behalf of the account.

''IAM user management''

''Q: How are IAM users managed?''
IAM supports multiple methods to:

* Create and manage IAM users.
* Create and manage IAM groups.
* Manage users' security credentials.
* Create and manage policies to grant access to AWS services and resources.

You can create and manage users, groups, and policies by using IAM APIs, the AWS CLI, or the IAM console. You also can use the visual editor and the IAM policy simulator to create and test policies.
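
The policies mentioned above are JSON documents. A minimal sketch of an identity-based policy granting read-only access to one S3 bucket; the bucket name is a placeholder:

```python
import json

# A minimal identity-based policy document. "example-bucket" is a
# placeholder; the actions shown are standard S3 read actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A document like this can be pasted into the IAM console's JSON editor or passed to the CreatePolicy API.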

''Q: What is a group?''
A group is a collection of IAM users. Manage group membership as a simple list:

* Add users to or remove them from a group.
* A user can belong to multiple groups.
* Groups cannot belong to other groups.

Groups can be granted permissions using access control policies. This makes it easier to manage permissions for a collection of users, rather than having to manage permissions for each individual user.
Groups do not have security credentials, and cannot access web services directly; they exist solely to make it easier to manage user permissions. For details, see Working with Groups and Users.

''Q: What kinds of security credentials can IAM users have?''
IAM users can have any combination of credentials that AWS supports, such as an AWS access key, X.509 certificate, SSH key, password for web app logins, or an MFA device. This allows users to interact with AWS in any manner that makes sense for them. An employee might have both an AWS access key and a password; a software system might have only an AWS access key to make programmatic calls; IAM users might have a private SSH key to access AWS CodeCommit repositories; and an outside contractor might have only an X.509 certificate to use the EC2 command-line interface. For details, see Temporary Security Credentials in the IAM documentation.

''Q: Which AWS services support IAM users?''
You can find the complete list of AWS services that support IAM users in the AWS Services That Work with IAM section of the IAM documentation. AWS plans to add support for other services over time.

''Q: Can I enable and disable user access?''
Yes. You can enable and disable an IAM user's access keys via the IAM APIs, AWS CLI, or IAM console. If you disable the access keys, the user cannot programmatically access AWS services.

''Q: Who is able to manage users for an AWS account?''
The AWS account holder can manage users, groups, security credentials, and permissions. In addition, you may grant permissions to individual users to place calls to IAM APIs in order to manage other users. For example, an administrator user may be created to manage users for a corporation—a recommended practice. When you grant a user permission to manage other users, they can do this via the IAM APIs, AWS CLI, or IAM console.

''Q: Can I structure a collection of users in a hierarchical way, such as in LDAP?''
Yes. You can organize users and groups under paths, similar to object paths in Amazon S3—for example /mycompany/division/project/joe.

''Q: Can I define users regionally?''
Not initially. Users are global entities, like an AWS account is today. No region is required to be specified when you define user permissions. Users can use AWS services in any geographic region.

''Q: How are MFA devices configured for IAM users?''
You (the AWS account holder) can order multiple MFA devices. You can then assign these devices to individual IAM users via the IAM APIs, AWS CLI, or IAM console.

''Q: What kind of key rotation is supported for IAM users?''
User access keys and X.509 certificates can be rotated just as they are for an AWS account's root access identifiers. You can manage and rotate programmatically a user's access keys and X.509 certificates via the IAM APIs, AWS CLI, or IAM console.

''Q: Can IAM users have individual EC2 SSH keys?''
Not in the initial release. IAM does not affect EC2 SSH keys or Windows RDP certificates. This means that although each user has separate credentials for accessing web service APIs, they must share SSH keys that are common across the AWS account under which users have been defined.

''Q: Where can I use my SSH keys?''
Currently, IAM users can use their SSH keys only with AWS CodeCommit to access their repositories.

''Q: Do IAM user names have to be email addresses?''
No, but they can be. User names are just ASCII strings that are unique within a given AWS account. You can assign names using any naming convention you choose, including email addresses.

''Q: Which character sets can I use for IAM user names?''
You can only use ASCII characters for IAM entities.

''Q: Are user attributes other than user name supported?''
Not at this time.

''Q: How are user passwords set?''
You can set an initial password for an IAM user via the IAM console, AWS CLI, or IAM APIs. User passwords never appear in clear text after the initial provisioning, and are never displayed or returned via an API call. IAM users can manage their passwords via the My Password page in the IAM console. Users access this page by selecting the Security Credentials option from the drop-down list in the upper right corner of the AWS Management Console.

''Q: Can I define a password policy for my user’s passwords?''
Yes, you can enforce strong passwords by requiring minimum length or at least one number. You can also enforce automatic password expiration, prevent re-use of old passwords, and require a password reset upon the next AWS sign-in. For details, see Setting an Account Password Policy for IAM Users.
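
A minimal sketch of the kind of check such a policy implies (illustrative only; IAM enforces its password policy server-side at sign-in and password-reset time):

```python
def meets_policy(password: str, min_length: int = 8,
                 require_digit: bool = True) -> bool:
    """Check a password against a simple policy of the kind IAM
    supports: a minimum length and at least one number."""
    if len(password) < min_length:
        return False
    if require_digit and not any(ch.isdigit() for ch in password):
        return False
    return True

print(meets_policy("hunter2!"))  # True: 8 characters, contains a digit
print(meets_policy("short1"))    # False: shorter than the minimum length
```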

''Q: Can I set usage quotas on IAM users?''
No. All limits are on the AWS account as a whole. For example, if your AWS account has a limit of 20 Amazon EC2 instances, IAM users with EC2 permissions can start instances up to the limit. You cannot limit what an individual user can do.

''IAM role management''
''Q: What is an IAM role?''
An IAM role is an IAM entity that defines a set of permissions for making AWS service requests. IAM roles are not associated with a specific user or group. Instead, roles are assumed by trusted entities, such as IAM users, applications, or AWS services such as EC2.

''Q: What problems do IAM roles solve?''
IAM roles allow you to delegate access with defined permissions to trusted entities without having to share long-term access keys. You can use IAM roles to delegate access to IAM users managed within your account, to IAM users under a different AWS account, or to an AWS service such as EC2.

''Q: How do I get started with IAM roles?''
You create a role in a way similar to how you create a user—name the role and attach a policy to it. For details, see Creating IAM Roles.

''Q: How do I assume an IAM role?''
You assume an IAM role by calling the AWS Security Token Service (STS) AssumeRole APIs (in other words, AssumeRole, AssumeRoleWithWebIdentity, and AssumeRoleWithSAML). These APIs return a set of temporary security credentials that applications can then use to sign requests to AWS service APIs.

''Q: How many IAM roles can I assume?''
There is no limit to the number of IAM roles you can assume, but you can only act as one IAM role when making requests to AWS services.

''Q: Who can use IAM roles?''
Any AWS customer can use IAM roles.

''Q: How much do IAM roles cost?''
IAM roles are free of charge. You will continue to pay for any resources a role in your AWS account consumes.

''Q: How are IAM roles managed?''
You can create and manage IAM roles via the IAM APIs, AWS CLI, or IAM console, which gives you a point-and-click, web-based interface.

''Q: What is the difference between an IAM role and an IAM user?''
An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM role does not have any credentials and cannot make direct requests to AWS services. IAM roles are meant to be assumed by authorized entities, such as IAM users, applications, or an AWS service such as EC2.

''Q: When should I use an IAM user, IAM group, or IAM role?''
An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM group is primarily a management convenience to manage the same set of permissions for a set of IAM users. An IAM role is an AWS Identity and Access Management (IAM) entity with permissions to make AWS service requests. IAM roles cannot make direct requests to AWS services; they are meant to be assumed by authorized entities, such as IAM users, applications, or AWS services such as EC2. Use IAM roles to delegate access within or between AWS accounts.
''Q: Can I add an IAM role to an IAM group?''
Not at this time.

''Q: How many policies can I attach to an IAM role?''
For inline policies: You can add as many inline policies as you want to a user, role, or group, but the total aggregate policy size (the sum size of all inline policies) per entity cannot exceed the following limits:

* User policy size cannot exceed 2,048 characters.
* Role policy size cannot exceed 10,240 characters.
* Group policy size cannot exceed 5,120 characters.

For managed policies: You can add up to 10 managed policies to a user, role, or group. The size of each managed policy cannot exceed 6,144 characters.

''Q: How many IAM roles can I create?''
You are limited to 1,000 IAM roles under your AWS account. If you need more roles, submit the IAM limit increase request form with your use case, and we will consider your request.

''Q: To which services can my application make requests?''
Your application can make requests to all AWS services that support role sessions.

''Q: What is IAM roles for EC2 instances?''
IAM roles for EC2 instances enables your applications running on EC2 to make requests to AWS services such as Amazon S3, Amazon SQS, and Amazon SNS without you having to copy AWS access keys to every instance. For details, see IAM Roles for Amazon EC2.

''Q: What are the features of IAM roles for EC2 instances?''
IAM roles for EC2 instances provides the following features:

* AWS temporary security credentials to use when making requests from running EC2 instances to AWS services.
* Automatic rotation of the AWS temporary security credentials.
* Granular AWS service permissions for applications running on EC2 instances.
''Q: What problem does IAM roles for EC2 instances solve?''
IAM roles for EC2 instances simplifies management and deployment of AWS access keys to EC2 instances. Using this feature, you associate an IAM role with an instance. Then your EC2 instance provides the temporary security credentials to applications running on the instance, and the applications can use these credentials to make requests securely to the AWS service resources defined in the role.

''Q: How do I get started with IAM roles for EC2 instances?''
To understand how roles work with EC2 instances, you need to use the IAM console to create a role, launch an EC2 instance that uses that role, and then examine the running instance. You can examine the instance metadata to see how the role credentials are made available to an instance. You can also see how an application that runs on an instance can use the role. For more details, see How Do I Get Started?

''Q: Can I use the same IAM role on multiple EC2 instances?''
Yes.

''Q: Can I change the IAM role on a running EC2 instance?''
Yes. Although a role is usually assigned to an EC2 instance when you launch it, a role can also be assigned to an EC2 instance that is already running. To learn how to assign a role to a running instance, see IAM Roles for Amazon EC2. You can also change the permissions on the IAM role associated with a running instance, and the updated permissions take effect almost immediately.

''Q: Can I associate an IAM role with an already running EC2 instance?''
Yes. You can assign a role to an EC2 instance that is already running. To learn how to assign a role to an already running instance, see IAM Roles for Amazon EC2.

''Q: Can I associate an IAM role with an Auto Scaling group?''
Yes. You can add an IAM role as an additional parameter in an Auto Scaling launch configuration and create an Auto Scaling group with that launch configuration. All EC2 instances launched in an Auto Scaling group that is associated with an IAM role are launched with the role as an input parameter. For more details, see What Is Auto Scaling? in the Auto Scaling Developer Guide.
 
''Q: Can I associate more than one IAM role with an EC2 instance? ''
No. You can only associate one IAM role with an EC2 instance at this time. This limit of one role per instance cannot be increased.
''Q: What happens if I delete an IAM role that is associated with a running EC2 instance?''
Any application running on the instance that is using the role will be denied access immediately.

''Q: Can I control which IAM roles an IAM user can associate with an EC2 instance?''
Yes. For details, see Permissions Required for Using Roles with Amazon EC2.

''Q: Which permissions are required to launch EC2 instances with an IAM role?''
You must grant an IAM user two distinct permissions to successfully launch EC2 instances with roles:

* Permission to launch EC2 instances.
* Permission to associate an IAM role with EC2 instances.
For details, see Permissions Required for Using Roles with Amazon EC2.

''Q: Who can access the access keys on an EC2 instance?''
Any local user on the instance can access the access keys associated with the IAM role.

''Q: How do I use the IAM role with my application on the EC2 instance?''
If you develop your application with the AWS SDK, the AWS SDK automatically uses the AWS access keys that have been made available on the EC2 instance. If you are not using the AWS SDK, you can retrieve the access keys from the EC2 instance metadata service. For details, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.
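As a sketch of the non-SDK path, an application can read the role's keys from the instance metadata service. The endpoint path is the documented one; the role name `my-app-role` is hypothetical:

```python
import json

# Instance metadata path that serves an EC2 instance's role credentials.
IMDS_CREDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def credentials_url(role_name):
    """URL that returns the temporary credentials for the given role."""
    return IMDS_CREDS + role_name

def parse_credentials(body):
    """Parse the JSON document served by the metadata endpoint."""
    doc = json.loads(body)
    return {
        "access_key_id": doc["AccessKeyId"],
        "secret_access_key": doc["SecretAccessKey"],
        "session_token": doc["Token"],
        "expiration": doc["Expiration"],
    }

# On an actual instance the fetch would be, e.g.:
#   from urllib.request import urlopen
#   creds = parse_credentials(urlopen(credentials_url("my-app-role")).read())
```

The AWS SDKs perform exactly this lookup (plus caching and refresh) on your behalf.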

''Q: How do I rotate the temporary security credentials on the EC2 instance?''
The AWS temporary security credentials associated with an IAM role are automatically rotated multiple times a day. New temporary security credentials are made available no later than five minutes before the existing temporary security credentials expire.
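Because fresh keys are published at least five minutes before the old ones expire, a credentials cache can use that window as its refresh margin. A minimal sketch:

```python
from datetime import datetime, timedelta

# New credentials are available no later than 5 minutes before expiry,
# so re-fetching inside that window always finds a fresh set.
REFRESH_MARGIN = timedelta(minutes=5)

def needs_refresh(expiration, now):
    """True when cached temporary credentials should be re-fetched."""
    return now >= expiration - REFRESH_MARGIN
```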

''Q: Can I use IAM roles for EC2 instances with any instance type or Amazon Machine Image?''
Yes. IAM roles for EC2 instances also work in Amazon Virtual Private Cloud (VPC), with spot and reserved instances.

''Q: What is a service-linked role?''
A service-linked role is a type of role that links to an AWS service (also known as a linked service) such that only the linked service can assume the role. Using these roles, you can delegate permissions to AWS services to create and manage AWS resources on your behalf.

''Q: Can I assume a service-linked role?''
No. A service-linked role can be assumed only by the linked service. This is the reason why the trust policy of a service-linked role cannot be modified.

''Q: Can I delete a service-linked role?''
Yes. If you no longer want an AWS service to perform actions on your behalf, you can delete its service-linked role. Before you delete the role, you must delete all AWS resources that depend on the role. This step ensures that you do not inadvertently delete a role required for your AWS resources to function properly.

''Q: How do I delete a service-linked role?''
You can delete a service-linked role from the IAM console. Choose Roles in the navigation pane, choose the service-linked role that you want to delete, and choose Delete role. (Note: For Amazon Lex, you must use the Amazon Lex console to delete the service-linked role.)

''Permissions''

''Q: How do permissions work?''
Access control policies are attached to users, groups, and roles to assign permissions to AWS resources. By default, IAM users, groups, and roles have no permissions; users with sufficient permissions must use a policy to grant the desired permissions.
 
''Q: How do I assign permissions using a policy?''
To set permissions, you can create and attach policies using the AWS Management Console, the IAM API, or the AWS CLI. Users who have been granted the necessary permissions can create policies and assign them to IAM users, groups, and roles.
 
''Q: What are managed policies?''
Managed policies are IAM resources that express permissions using the IAM policy language. You can create, edit, and manage them separately from the IAM users, groups, and roles to which they are attached. After you attach a managed policy to multiple IAM users, groups, or roles, you can update that policy in one place and the permissions automatically extend to all attached entities. Managed policies are managed either by you (these are called customer managed policies) or by AWS (these are called AWS managed policies). For more information about managed policies, see Managed Policies and Inline Policies.
 
''Q: How do I create a customer managed policy?''
You can use the visual editor or the JSON editor in the IAM console. The visual editor is a point-and-click editor that guides you through the process of granting permissions in a policy without requiring you to write the policy in JSON. You can also create policies in JSON by using the AWS CLI or SDKs.
 
''Q: How do I assign commonly used permissions?''
AWS provides a set of commonly used permissions that you can attach to IAM users, groups, and roles in your account. These are called AWS managed policies. One example is read-only access for Amazon S3. When AWS updates these policies, the permissions are applied automatically to the users, groups, and roles to which the policy is attached. AWS managed policies automatically appear in the Policies section of the IAM console. When you assign permissions, you can use an AWS managed policy or you can create your own customer managed policy. Create a new policy based on an existing AWS managed policy, or define your own.
 
''Q: How do group-based permissions work?''
Use IAM groups to assign the same set of permissions to multiple IAM users. A user can also have individual permissions assigned to them. The two ways to attach permissions to users work together to set overall permissions.
 
''Q: What is the difference between assigning permissions using IAM groups and assigning permissions using managed policies?''
Use IAM groups to collect IAM users and define common permissions for those users. Use managed policies to share permissions across IAM users, groups, and roles. For example, if you want a group of users to be able to launch an Amazon EC2 instance, and you also want the role on that instance to have the same permissions as the users in the group, you can create a managed policy and assign it to the group of users and the role on the Amazon EC2 instance.
 
''Q: How are IAM policies evaluated in conjunction with Amazon S3, Amazon SQS, Amazon SNS, and AWS KMS resource-based policies?''
IAM policies are evaluated together with the service’s resource-based policies. An explicit deny in any policy overrides any allows; otherwise, if a policy of any type grants access, the action is allowed. For more information about the policy evaluation logic, see IAM Policy Evaluation Logic.
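The decision order can be sketched locally (a simplification: real evaluation also involves condition keys and the different policy types):

```python
def evaluate(statements):
    """Each statement is a dict with 'effect' ('Allow' or 'Deny') and 'matches'
    (whether its action/resource matched the request).  An explicit deny always
    wins; otherwise any matching allow permits the action; the default is deny."""
    matched = [s for s in statements if s["matches"]]
    if any(s["effect"] == "Deny" for s in matched):
        return "explicitDeny"
    if any(s["effect"] == "Allow" for s in matched):
        return "allowed"
    return "implicitDeny"
```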
 
''Q: Can I use a managed policy as a resource-based policy?''
Managed policies can only be attached to IAM users, groups, or roles. You cannot use them as resource-based policies.
 
''Q: How do I set granular permissions using policies?''
Using policies, you can specify several layers of permission granularity. First, you can define specific AWS service actions you wish to allow or explicitly deny access to. Second, depending on the action, you can define specific AWS resources the actions can be performed on. Third, you can define conditions to specify when the policy is in effect (for example, if MFA is enabled or not).
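For illustration, the three layers map onto a statement's Action, Resource, and Condition elements (the bucket name is hypothetical; `aws:MultiFactorAuthPresent` is a real condition key):

```python
# A policy expressed as a Python dict: specific actions, a specific resource,
# and a condition that requires the caller to have authenticated with MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:DeleteObject"],                 # layer 1: actions
        "Resource": "arn:aws:s3:::example_bucket/*",   # layer 2: resources
        "Condition": {                                 # layer 3: conditions
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        },
    }],
}
```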
 
''Q: How can I easily remove unnecessary permissions?''
To help you determine which permissions are needed, the IAM console now displays service last accessed data that shows the hour when an IAM entity (a user, group, or role) last accessed an AWS service. Knowing if and when an IAM entity last exercised a permission can help you remove unnecessary permissions and tighten your IAM policies with less effort.
 
''Q: Can I grant permissions to access or change account-level information (for example, payment instrument, contact email address, and billing history)? ''
Yes, you can delegate the ability for an IAM user or a federated user to view AWS billing data and modify AWS account information. For more information about controlling access to your billing information, see Controlling Access.
 
''Q: Who can create and manage access keys in an AWS account?''
Only the AWS account owner can manage the access keys for the root account. The account owner and IAM users or roles that have been granted the necessary permissions can manage access keys for IAM users.
 
''Q: Can I grant permissions to access AWS resources owned by another AWS account? ''
Yes. Using IAM roles, IAM users and federated users can access resources in another AWS account via the AWS Management Console, the AWS CLI, or the APIs. See Manage IAM Roles for more information.
 
''Q: What does a policy look like?''
The following policy grants access to add, update, and delete objects from a specific folder, example_folder, in a specific bucket, example_bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::example_bucket/example_folder/*"
    }
  ]
}
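The trailing `*` in the Resource ARN matches any object key under the folder. A quick way to see which ARNs the statement covers, using shell-style matching (a rough stand-in for IAM's `*`/`?` wildcard semantics):

```python
from fnmatch import fnmatchcase

# Resource pattern from the policy above.
RESOURCE = "arn:aws:s3:::example_bucket/example_folder/*"

def policy_covers(object_arn):
    """Rough check of whether the statement's Resource pattern matches an ARN."""
    return fnmatchcase(object_arn, RESOURCE)
```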
 
''Q: What is a policy summary?''
If you are using the IAM console and choose a policy, you will see a policy summary. A policy summary lists the access level, resources, and conditions for each service defined in a policy (see the following screenshot for an example). The access level (List, Read, Write, or Permissions management) is defined by the actions granted for each service in the policy. You can view the policy in JSON by choosing the JSON button.

===> need fig here

''Policy simulator''

''Q: What is the IAM policy simulator?''
The IAM policy simulator is a tool to help you understand, test, and validate the effects of your access control policies.

''Q: What can the policy simulator be used for?''
You can use the policy simulator in several ways. You can test policy changes to ensure they have the desired effect before committing them to production. You can validate existing policies attached to users, groups, and roles to verify and troubleshoot permissions. You can also use the policy simulator to understand how IAM policies and resource-based policies work together to grant or deny access to AWS resources.

''Q: Who can use the policy simulator? ''
The policy simulator is available to all AWS customers.

''Q: How much does the policy simulator cost?''
The policy simulator is available at no extra cost.

''Q: How do I get started? ''
Go to https://policysim.aws.amazon.com, or click the link on the IAM console under “Additional Information.” Specify a new policy or choose an existing set of policies from a user, group, or role that you’d like to evaluate. Then select a set of actions from the list of AWS services, provide any required information to simulate the access request, and run the simulation to determine whether the policy allows or denies permissions to the selected actions and resources. To learn more about the IAM policy simulator, watch our Getting Started video or see the documentation.

''Q: What kinds of policies does the IAM policy simulator support?''
The policy simulator supports testing of newly entered policies and existing policies attached to users, groups, or roles. In addition, you can simulate whether resource-level policies grant access to a particular resource for Amazon S3 buckets, Amazon Glacier vaults, Amazon SNS topics, and Amazon SQS queues. These are included in the simulation when an Amazon Resource Name (ARN) is specified in the Resource field in Simulation Settings for a service that supports resource policies.

''Q: If I change a policy in the policy simulator, do those changes persist in production?''
No. To apply changes to production, copy the policy that you’ve modified in the policy simulator and attach it to the desired IAM user, group, or role.

''Q: Can I use the policy simulator programmatically?''
Yes. You can use the policy simulator using the AWS SDKs or AWS CLI in addition to the policy simulator console. Use the iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies. To test the effects of new or updated policies that are not yet attached to a user, group, or role, call the iam:SimulateCustomPolicy API.  
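A hedged sketch of the programmatic path: the request parameters are built locally, and the boto3 call itself (which needs configured AWS credentials) is shown commented out. The policy and ARNs are hypothetical:

```python
import json

# A policy to test without attaching it to any principal.
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::example_bucket/*"}],
}

# Request parameters for the iam:SimulateCustomPolicy API.
params = {
    "PolicyInputList": [json.dumps(policy)],
    "ActionNames": ["s3:GetObject", "s3:PutObject"],
    "ResourceArns": ["arn:aws:s3:::example_bucket/report.csv"],
}

# With credentials configured, the call would be:
#   import boto3
#   resp = boto3.client("iam").simulate_custom_policy(**params)
#   for r in resp["EvaluationResults"]:
#       print(r["EvalActionName"], r["EvalDecision"])
```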

''Signing in''
Q: How does an IAM user sign in?
To sign in to the AWS Management Console as an IAM user, you must provide your account ID or account alias in addition to your user name and password. When your administrator created your IAM user in the console, they should have provided you with your user name and the URL to your account sign-in page. That URL includes your account ID or account alias.
 
https://My_AWS_Account_ID.signin.aws.amazon.com/console/
 
You can also sign in at the following general sign-in endpoint and type your account ID or account alias manually:
 
https://console.aws.amazon.com/
 
For convenience, the AWS sign-in page uses a browser cookie to remember the IAM user name and account information. The next time the user goes to any page in the AWS Management Console, the console uses the cookie to redirect the user to the account sign-in page.
 
Note: IAM users can still use the URL link provided to them by their administrator to sign in to the AWS Management Console.
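As a small helper showing how the account-specific sign-in URL is formed (the account ID or alias value is hypothetical):

```python
def account_signin_url(account):
    """account: a 12-digit AWS account ID or an account alias."""
    return "https://{}.signin.aws.amazon.com/console/".format(account)
```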
 
Q: What is an AWS account alias?
The account alias is a name you define to make it more convenient to identify your account. You can create an alias using the IAM APIs, AWS Command Line Tools, or the IAM console. You can have one alias per AWS account.
 
Q: Which AWS sites can IAM users access?
IAM users can sign in to the following AWS sites:
* AWS Management Console
* AWS Forums
* AWS Support Center
* AWS Marketplace
Q: Can IAM users sign in to other Amazon.com properties with their credentials? 
No. Users created with IAM are recognized only by AWS services and applications.
 
Q: Is there an authentication API to verify IAM user sign-ins? 
No. There is no programmatic way to verify user sign-ins.
 
Q: Can users SSH to EC2 instances using their AWS user name and password? 
No. User security credentials created with IAM are not supported for direct authentication to customer EC2 instances. Managing EC2 SSH credentials is the customer’s responsibility within the EC2 console.

''Temporary security credentials''

Q: What are temporary security credentials? 
Temporary security credentials consist of the AWS access key ID, secret access key, and security token. Temporary security credentials are valid for a specified duration and for a specific set of permissions. Temporary security credentials are sometimes simply referred to as tokens. Tokens can be requested for IAM users or for federated users you manage in your own corporate directory. For more information, see Common Scenarios for Temporary Credentials.
 
Q: What are the benefits of temporary security credentials? 
Temporary security credentials allow you to:
* Extend your internal user directories to enable federation to AWS, enabling your employees and applications to securely access AWS service APIs without needing to create an AWS identity for them.
* Request temporary security credentials for an unlimited number of federated users.
* Configure the time period after which temporary security credentials expire, offering improved security when accessing AWS service APIs through mobile devices where there is a risk of losing the device.
 
Q: How can I request temporary security credentials for federated users? 
You can call the GetFederationToken, AssumeRole, AssumeRoleWithSAML, or AssumeRoleWithWebIdentity STS APIs.
 
Q: How can IAM users request temporary security credentials for their own use? 
IAM users can request temporary security credentials for their own use by calling the AWS STS GetSessionToken API. The default expiration for these temporary credentials is 12 hours; the minimum is 15 minutes, and the maximum is 36 hours.
You can also use temporary credentials with Multi-Factor Authentication (MFA)-Protected API Access.
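Those bounds translate directly into seconds for the API's DurationSeconds parameter; a small validation sketch:

```python
MIN_DURATION = 15 * 60           # 15 minutes ->    900 seconds
MAX_DURATION = 36 * 60 * 60      # 36 hours   -> 129600 seconds
DEFAULT_DURATION = 12 * 60 * 60  # 12 hours   ->  43200 seconds

def session_duration(seconds=DEFAULT_DURATION):
    """Validate a GetSessionToken DurationSeconds value."""
    if not MIN_DURATION <= seconds <= MAX_DURATION:
        raise ValueError("DurationSeconds must be between 900 and 129600")
    return seconds
```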
 
Q: How can I use temporary security credentials to call AWS service APIs? 
If you're making direct HTTPS API requests to AWS, you can sign those requests with the temporary security credentials that you get from AWS Security Token Service (AWS STS). To do this:
* Use the access key ID and secret access key that are provided with the temporary security credentials the same way you would use long-term credentials to sign a request. For more information about signing HTTPS API requests, see Signing AWS API Requests in the AWS General Reference.
* Include the session token that is provided with the temporary security credentials: for Amazon S3, via the "x-amz-security-token" HTTP header; for other AWS services, via the SecurityToken parameter.
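A minimal helper attaching the session token to an already-signed request's headers (header name as documented; the token value is hypothetical):

```python
def with_session_token(headers, session_token):
    """Return a copy of signed request headers with the STS session token
    attached via the x-amz-security-token header."""
    out = dict(headers)
    out["x-amz-security-token"] = session_token
    return out
```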
 
Q: Which AWS services accept temporary security credentials? 
For a list of supported services, see AWS Services That Work with IAM.
 
Q: What is the maximum size of the access policy that I can specify when requesting temporary security credentials (either GetFederationToken or AssumeRole)? 
The policy plaintext must be 2048 bytes or shorter. However, an internal conversion compresses it into a packed binary format with a separate limit.
 
Q: Can a temporary security credential be revoked prior to its expiration? 
No. When requesting temporary credentials, we recommend the following:
* When creating temporary security credentials, set the expiration to a value that is appropriate for your application.
* Because root account permissions cannot be restricted, use an IAM user and not the root account for creating temporary security credentials. You can revoke the permissions of the IAM user that issued the original call to request them; this action almost immediately revokes privileges for all temporary security credentials issued by that IAM user.
 
Q: Can I reactivate or extend the expiration of temporary security credentials? 
No. It is a good practice to actively check the expiration and request a new temporary security credential before the old one expires. This rotation process is automatically managed for you when temporary security credentials are used in roles for EC2 instances.
 
Q: Are temporary security credentials supported in all regions? 
Customers can request tokens from AWS STS endpoints in all regions, including AWS GovCloud (US) and China (Beijing) regions. Temporary credentials from AWS GovCloud (US) and China (Beijing) can be used only in the region from which they originated. Temporary credentials requested from any other region such as US East (N. Virginia) or EU (Ireland) can be used in all regions except AWS GovCloud (US) and China (Beijing).
 
Q: Can I restrict the use of temporary security credentials to a region or a subset of regions?
No. You cannot restrict the temporary security credentials to a particular region or subset of regions, except the temporary security credentials from AWS GovCloud (US) and China (Beijing), which can be used only in the respective regions from which they originated.
 
Q: What do I need to do before I can start using an AWS STS endpoint?
AWS STS endpoints are active by default in all regions and you can start using them without any further actions.
 
Q: What happens if I try to use a regional AWS STS endpoint that has been deactivated for my AWS account?
If you attempt to use a regional AWS STS endpoint that has been deactivated for your AWS account, you will see an AccessDenied exception from AWS STS with the following message: “AWS STS is not activated in this region for account: AccountID. Your account administrator can activate AWS STS in this region using the IAM console.”
 
Q: What permissions are required to activate or deactivate AWS STS regions from the Account Settings page?
Only users with at least iam:* permissions can activate or deactivate AWS STS regions from the Account Settings page in the IAM console. Note that the AWS STS endpoints in US East (N. Virginia), AWS GovCloud (US), and China (Beijing) regions are always active and cannot be deactivated.
 
Q: Can I use the API or CLI to activate or deactivate AWS STS regions?
No. There is no API or CLI support at this time to activate or deactivate AWS STS regions. We plan to provide API and CLI support in a future release.

''Identity federation''
Q: What is identity federation? 
AWS Identity and Access Management (IAM) supports identity federation for delegated access to the AWS Management Console or AWS APIs. With identity federation, external identities are granted secure access to resources in your AWS account without having to create IAM users. These external identities can come from your corporate identity provider (such as Microsoft Active Directory or AWS Directory Service) or from a web identity provider (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider).
 
Q: What are federated users? 
Federated users (external identities) are users you manage outside of AWS in your corporate directory, but to whom you grant access to your AWS account using temporary security credentials. They differ from IAM users, which are created and maintained in your AWS account.
 
Q: Do you support SAML? 
Yes, AWS supports the Security Assertion Markup Language (SAML) 2.0.
 
Q: What SAML profiles does AWS support? 
The AWS single sign-on (SSO) endpoint supports the IdP-initiated HTTP-POST binding WebSSO SAML Profile. This enables a federated user to sign in to the AWS Management Console using a SAML assertion. A SAML assertion can also be used to request temporary security credentials using the AssumeRoleWithSAML API. For more information, see About SAML 2.0-Based Federation.
 
Q: Can federated users access AWS APIs? 
Yes. You can programmatically request temporary security credentials for your federated users to provide them secure and direct access to AWS APIs. We have provided a sample application that demonstrates how you can enable identity federation, providing users maintained by Microsoft Active Directory access to AWS service APIs. For more information, see Using Temporary Security Credentials to Request Access to AWS Resources.
 
Q: Can federated users access the AWS Management Console? 
Yes. There are a couple of ways to achieve this. One way is by programmatically requesting temporary security credentials (such as GetFederationToken or AssumeRole) for your federated users and including those credentials as part of the sign-in request to the AWS Management Console. After you have authenticated a user and granted them temporary security credentials, you generate a sign-in token that is used by the AWS single sign-on (SSO) endpoint. The user’s actions in the console are limited to the access control policy associated with the temporary security credentials. For more details, see Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker).
Alternatively, you can post a SAML assertion directly to AWS sign-in ( https://signin.aws.amazon.com/saml). The user’s actions in the console are limited to the access control policy associated with the IAM role that is assumed using the SAML assertion. For more details, see Enabling SAML 2.0 Federated Users to Access the AWS Management Console.
Using either approach allows a federated user to access the console without having to sign in with a user name and password. We have provided a sample application that demonstrates how you can enable identity federation, providing users maintained by Microsoft Active Directory access to the AWS Management Console.
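The custom-federation-broker approach boils down to two URL constructions against the documented federation endpoint; a sketch with hypothetical credential values:

```python
import json
from urllib.parse import quote_plus

FEDERATION = "https://signin.aws.amazon.com/federation"

def signin_token_url(access_key, secret_key, session_token):
    """URL that exchanges temporary security credentials for a sign-in token."""
    session = json.dumps({"sessionId": access_key,
                          "sessionKey": secret_key,
                          "sessionToken": session_token})
    return "{}?Action=getSigninToken&Session={}".format(
        FEDERATION, quote_plus(session))

def console_login_url(signin_token, issuer,
                      destination="https://console.aws.amazon.com/"):
    """Final URL that signs the federated user in to the console; Issuer is
    where the user is sent when the session expires."""
    return "{}?Action=login&Issuer={}&Destination={}&SigninToken={}".format(
        FEDERATION, quote_plus(issuer), quote_plus(destination), signin_token)
```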
 
Q: How do I control what a federated user is allowed to do when signed in to the console? 
When you request temporary security credentials for your federated user using an AssumeRole API, you can optionally include an access policy with the request. The federated user’s privileges are the intersection of permissions granted by the access policy passed with the request and the access policy attached to the IAM role that was assumed. The access policy passed with the request cannot elevate the privileges associated with the IAM role being assumed.
When you request temporary security credentials for your federated user using the GetFederationToken API, you must provide an access control policy with the request. The federated user’s privileges are the intersection of the permissions granted by the access policy passed with the request and the access policy attached to the IAM user that was used to make the request. The access policy passed with the request cannot elevate the privileges associated with the IAM user used to make the request.
These federated user permissions apply to both API access and actions taken within the AWS Management Console.
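The intersection rule can be illustrated with plain sets (the action names are hypothetical):

```python
def effective_permissions(role_policy_actions, session_policy_actions):
    """A federated user ends up with only the actions granted by BOTH the
    assumed role's policy and the access policy passed with the request."""
    return sorted(set(role_policy_actions) & set(session_policy_actions))
```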
 
Q: What permissions does a federated user need to use the console? 
A user requires permissions to the AWS service APIs called by the AWS Management Console. Common permissions required to access AWS services are documented in Using Temporary Security Credentials to Request Access to AWS Resources.
 
Q: How do I control how long a federated user has access to the AWS Management Console? 
Depending on the API used to create the temporary security credentials, you can specify a session limit between 15 minutes and 36 hours (for GetFederationToken and GetSessionToken) and between 15 minutes and 12 hours (for AssumeRole* APIs), during which time the federated user can access the console. When the session expires, the federated user must request a new session by returning to your identity provider, where you can grant them access again. Learn more about setting session duration.
 
Q: What happens when the identity federation console session times out? 
The user is presented with a message stating that the console session has timed out and that they need to request a new session. You can specify a URL to direct users to your local intranet web page where they can request a new session. You add this URL when you specify an Issuer parameter as part of your sign-in request. For more information, see Enabling SAML 2.0 Federated Users to Access the AWS Management Console.
 
Q: How many federated users can I give access to the AWS Management Console? 
There is no limit to the number of federated users who can be given access to the console.
 
Q: What is web identity federation?
Web identity federation allows you to create AWS-powered mobile apps that use public identity providers (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider) for authentication. With web identity federation, you have an easy way to integrate sign-in from public identity providers (IdPs) into your apps without having to write any server-side code and without distributing long-term AWS security credentials with the app.
For more information about web identity federation and to get started, see About Web Identity Federation. 
 
Q: How do I enable web identity federation with accounts from public IdPs?
For best results, use Amazon Cognito as your identity broker for almost all web identity federation scenarios. Amazon Cognito is easy to use and provides additional capabilities such as anonymous (unauthenticated) access, and synchronizing user data across devices and providers. However, if you have already created an app that uses web identity federation by manually calling the AssumeRoleWithWebIdentity API, you can continue to use it and your apps will still work.
Here are the basic steps to enable identity federation using one of the supported web IdPs:
# Sign up as a developer with the IdP and configure your app with the IdP, who gives you a unique ID for your app.
# If you use an IdP that is compatible with OIDC, create an identity provider entity for it in IAM.
# In AWS, create one or more IAM roles.
# In your application, authenticate your users with the public IdP.
# In your app, make an unsigned call to the AssumeRoleWithWebIdentity API to request temporary security credentials.
# Using the temporary security credentials you get in the AssumeRoleWithWebIdentity response, your app makes signed requests to AWS APIs.
# Your app caches the temporary security credentials so that you do not have to get new ones each time the app needs to make a request to AWS.
For more detailed steps, see Using Web Identity Federation APIs for Mobile Apps.
 
Q: How does identity federation using AWS Directory Service differ from using a third-party identity management solution?
If you want your federated users to be able to access only the AWS Management Console, using AWS Directory Service provides similar capabilities compared to using a third-party identity management solution. End users are able to sign in using their existing corporate credentials and access the AWS Management Console. Because AWS Directory Service is a managed service, customers do not need to set up or manage federation infrastructure; they only need to create an AD Connector directory to integrate with their on-premises directory. If you are interested in providing your federated users access to AWS APIs, use a third-party offering or deploy your own proxy server.

''Billing''
Q: Does AWS Billing provide aggregated usage and cost breakdowns by user? 
No, this is not currently supported.
 
Q: Does the IAM service cost anything? 
No, this is a feature of your AWS account provided at no additional charge.
 
Q: Who pays for usage incurred by users under an AWS Account? 
The AWS account owner controls and is responsible for all usage, data, and resources under the account.
 
Q: Is billable user activity logged in AWS usage data? 
Not currently. This is planned for a future release.
 
Q: How does IAM compare with Consolidated Billing? 
IAM and Consolidated Billing are complementary features. Consolidated Billing enables you to consolidate payment for multiple AWS accounts within your company by designating a single paying account. The scope of IAM is not related to Consolidated Billing. A user exists within the confines of an AWS account and does not have permissions across linked accounts. For more details, see Paying Bills for Multiple Accounts Using Consolidated Billing.
 
Q: Can a user access the AWS account’s billing information? 
Yes, but only if you let them. In order for IAM users to access billing information, you must first grant access to the Account Activity or Usage Reports. See Controlling Access.

''Additional questions''
Q: What happens if a user tries to access a service that has not yet been integrated with IAM? 
The service returns an “Access denied” error.
 
Q: Are IAM actions logged for auditing purposes?
Yes. You can log IAM actions, STS actions, and AWS Management Console sign-ins by activating AWS CloudTrail. To learn more about AWS logging, see AWS CloudTrail.
 
Q: Is there any distinction between people and software agents as AWS entities?
No, both of these entities are treated like users with security credentials and permissions. However, people are the only ones to use a password in the AWS Management Console.
 
Q: Do users work with AWS Support Center and Trusted Advisor? 
Yes, IAM users have the ability to create and modify support cases as well as use Trusted Advisor.
 
Q: Are there any default quota limits associated with IAM? 
Yes, by default your AWS account has initial quotas set for all IAM-related entities. For details see Limitations on IAM Entities and Objects.
 
These quotas are subject to change. If you require an increase, you can access the Service Limit Increase form via the Contact Us page, and choose IAM Groups and Users from the Limit Type drop-down list.

''Multi-factor authentication''
Q. What is AWS MFA?
AWS multi-factor authentication (AWS MFA) provides an extra level of security that you can apply to your AWS environment. You can enable AWS MFA for your AWS account and for individual AWS Identity and Access Management (IAM) users you create under your account.

Q. How does AWS MFA work?
There are two primary ways to authenticate using an AWS MFA device:

AWS Management Console users: When a user with MFA enabled signs in to an AWS website, they are prompted for their user name and password (the first factor: what they know) and an authentication response from their AWS MFA device (the second factor: what they have). All AWS websites that require sign-in, such as the AWS Management Console, fully support AWS MFA. You can also use AWS MFA together with Amazon S3 MFA Delete for additional protection of your stored S3 versions.
AWS API users: You can enforce MFA authentication by adding MFA restrictions to your IAM policies. To access APIs and resources protected in this way, developers can request temporary security credentials and pass optional MFA parameters in their AWS Security Token Service (STS) requests (STS is the service that issues temporary security credentials). MFA-validated temporary security credentials can then be used to call MFA-protected APIs and resources. Note: AWS STS and MFA-protected APIs do not currently support U2F security keys as MFA.
Q. How do I help protect my AWS resources with MFA?
Follow two easy steps:

1. Get an MFA device. You have three options:

Purchase a hardware YubiKey security key from Yubico, a third-party provider.
Purchase a hardware device from Gemalto, a third-party provider.
Install a virtual MFA–compatible application on a device such as your smartphone.
Visit the AWS MFA page for details about how to acquire a hardware or virtual MFA device.
2. After you have an MFA device, you must activate it in the IAM console. You can also use the AWS CLI to activate a virtual MFA or hardware MFA (Gemalto) device for an IAM user. Note: The AWS CLI does not currently support activation of U2F security keys.

Q. Is there a fee associated with using AWS MFA?
AWS does not charge any additional fee for using AWS MFA with your AWS account. However, if you want to use a physical MFA device, you will need to purchase a device compatible with AWS MFA from a third-party provider, either Gemalto or Yubico. For details, visit the Yubico or Gemalto websites.
Q. Can I have multiple MFA devices active for my AWS account?
Yes. Each IAM user can have its own MFA device. However, each identity (IAM user or root account) can be associated with only one MFA device.
Q. Can I use my U2F security key with multiple AWS accounts?

Yes. AWS allows you to use the same U2F security key with several root and IAM users across multiple accounts.

Q. Can I use virtual, hardware, or SMS MFA with multiple AWS accounts?
No. The MFA device or mobile phone number associated with virtual, hardware, or SMS MFA is bound to an individual AWS identity (IAM user or root account). If you have a TOTP-compatible application installed on your smartphone, you can create multiple virtual MFA devices on the same smartphone; each virtual MFA device is bound to a single identity, just like a hardware MFA (Gemalto) device. If you dissociate (deactivate) an MFA device, you can then reuse it with a different AWS identity. A hardware MFA device cannot currently be used by more than one identity simultaneously.
Q. I already have a hardware MFA device (Gemalto) from my workplace or from another service I use. Can I reuse this device with AWS MFA?
No. AWS MFA relies on knowing a unique secret associated with your hardware MFA (Gemalto) device in order to support its use. Because of security constraints that mandate such secrets never be shared between multiple parties, AWS MFA cannot support the use of your existing Gemalto device. Only a compatible hardware MFA device purchased from Gemalto can be used with AWS MFA. You can re-use an existing U2F security key with AWS MFA, as U2F security keys do not share any secrets between multiple parties.
''Purchasing an MFA Device''

Q. I’m having a problem with an order for an MFA device on the third-party provider’s website. Where can I get help?
Yubico or Gemalto's customer service can assist you.
Q. I received a defective or damaged MFA device from the third-party provider. Where can I get help?
Yubico or Gemalto's customer service can assist you.
Q. I just received an MFA device from the third-party provider. What should I do?
You simply need to activate the MFA device to enable AWS MFA for your AWS account. Use the IAM console to perform this task.
''Provisioning a Virtual MFA Device''

Q. What is a virtual MFA device?
A virtual MFA device is an entry created in a TOTP-compatible software application that can generate six-digit authentication codes. The application can run on any compatible computing device, such as a smartphone.
Q. What are the differences between a virtual MFA device and physical MFA devices?
Virtual MFA devices use the same protocols as physical MFA devices. They are software based and can run on devices you already own, such as smartphones. Most virtual MFA applications also let you enable more than one virtual MFA device, which makes them more convenient than physical MFA devices.

Q. Which virtual MFA applications can I use with AWS MFA?
You can use applications that generate TOTP-compliant authentication codes, such as the Google Authenticator application, with AWS MFA. You can provision virtual MFA devices either automatically by scanning a QR code with the device's camera or by manual seed entry in the virtual MFA application.
Visit the MFA page for a list of supported virtual MFA applications.
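These six-digit codes come from the TOTP algorithm (RFC 6238), which HMACs a shared seed with the current 30-second time step. A minimal, stdlib-only Python sketch of the protocol (an illustration, not AWS code):

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1, 30-second time steps)."""
    counter = timestamp // step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "287082"
print(totp(b"12345678901234567890", 59))
```

Because both sides compute the code from the seed and the clock, a device whose clock drifts produces codes the server rejects, which is why resynchronization asks for two consecutive codes.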

Q. What is a QR code?
A QR code is a two-dimensional barcode that is readable by dedicated QR barcode readers and most smartphones. The code consists of black squares arranged in larger square patterns on a white background. The QR code contains the required security configuration information to provision a virtual MFA device in your virtual MFA application.

Q. How do I provision a new virtual MFA device?
You can configure a new virtual MFA device in the IAM console for your IAM users as well as for your AWS root account. You can also use the aws iam create-virtual-mfa-device command in the AWS CLI or the CreateVirtualMFADevice API to provision new virtual MFA devices under your account. The aws iam create-virtual-mfa-device command and the CreateVirtualMFADevice API return the required configuration information, called a seed, to configure the virtual MFA device in your AWS MFA-compatible application. You can either grant your IAM users the permissions to call this API directly or perform the initial provisioning for them.
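For illustration, the Base32 seed is typically delivered to the authenticator app as an otpauth:// URI, which is what the QR code encodes. A sketch with hypothetical account and seed values (the URI shape follows the common Key URI convention, not an AWS-specific format):

```python
from urllib.parse import quote

def otpauth_uri(account: str, issuer: str, base32_seed: str) -> str:
    """Build the otpauth:// URI a TOTP app expects (the payload of the QR code)."""
    label = quote(f"{issuer}:{account}")   # e.g. "AWS%3Aalice%40example-account"
    return f"otpauth://totp/{label}?secret={base32_seed}&issuer={quote(issuer)}"

# Hypothetical seed and account name, for illustration only
print(otpauth_uri("alice@example-account", "AWS", "JBSWY3DPEHPK3PXP"))
```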
Q. How should I handle and distribute the seed material for virtual MFA devices?

You should treat seed material like any other secret (for example, AWS secret access keys and passwords).

Q. How can I enable an IAM user to manage virtual MFA devices under my account?
Grant the IAM user permission to call the CreateVirtualMFADevice API. Users with this permission can provision new virtual MFA devices under your account.
''SMS MFA''

Q. Can I still request preview access to the SMS MFA?

We are no longer accepting new participants for the SMS MFA preview. We encourage you to use MFA on your AWS account by using a U2F security key, hardware device, or virtual (software-based) MFA device.

Q. When will the preview for SMS MFA end?

On February 1, 2019, AWS will no longer require IAM users set up with an SMS MFA device to enter an MFA six-digit code. These users will also no longer receive an SMS code when they sign in. We encourage you to use MFA through a U2F security key, a hardware device, or a virtual (software-based) MFA device. You can continue using this feature until January 31, 2019.
''Enabling AWS MFA Devices''

Q. Where do I enable AWS MFA?
You can enable AWS MFA for your AWS account and your IAM users in the IAM console, with the AWS CLI, or by calling the AWS API. Note: The AWS CLI and AWS API do not currently support enabling U2F security keys.
Q. What information do I need to activate a hardware or virtual MFA device?
If you are activating the MFA device with the IAM console, you only need the device. If you are using the AWS CLI or the IAM API, you need the following:

1. The serial number of the MFA device. The format of the serial number depends on whether you are using a hardware device or a virtual device:

Hardware MFA device: The serial number is on the bar-coded label on the back of the device.
Virtual MFA device: The serial number is the Amazon Resource Name (ARN) value returned when you run the aws iam create-virtual-mfa-device command in the AWS CLI or call the CreateVirtualMFADevice API.
2. Two consecutive MFA codes displayed by the MFA device.

Q. My MFA device seems to be working normally, but I am not able to activate it. What should I do?
Please contact us for help.
''Using AWS MFA''

Q. If I enable AWS MFA for my AWS root account or my IAM users, do they always have to use MFA to sign in to the AWS Management Console?
Yes. The AWS root account user and IAM users must have their MFA device with them any time they sign in to any AWS website.

If your MFA device is lost, damaged, stolen, or not working, you can sign in using alternative factors of authentication, deactivate the MFA device, and activate a new device. As a security best practice, we recommend that you change your root account's password.

If your IAM users lose or damage their MFA device, or if it is stolen or stops working, you can disable AWS MFA yourself by using the IAM console or the AWS CLI.

Q. If I enable AWS MFA for my AWS root account or IAM users, do they always need to complete the MFA challenge to directly call AWS APIs?
No, it’s optional. However, you must complete the MFA challenge if you plan to call APIs that are secured by MFA-protected API access.

If you are calling AWS APIs using access keys for your AWS root account or IAM user, you do not need to enter an MFA code. For security reasons, we recommend that you remove all access keys from your AWS root account and instead call AWS APIs with the access keys for an IAM user that has the required permissions.

Note: U2F security keys currently do not work with MFA-protected APIs and currently cannot be used as MFA for AWS APIs.
Q. How do I sign in to the AWS Portal and AWS Management Console using my MFA device?
Follow these two steps:

If you are signing in as an AWS root account, sign in as usual with your user name and password when prompted. To sign in as an IAM user, use the account-specific URL and provide your user name and password when prompted.
If you have enabled virtual, hardware, or SMS MFA, enter the six-digit MFA code that appears on your MFA device. If you have enabled a U2F security key, insert the key into the USB port of your computer, wait for the key to blink, and then touch the button or gold disk on your key.
Q. Does AWS MFA affect how I access AWS Service APIs?
AWS MFA changes the way IAM users access AWS Service APIs only if the account administrator(s) choose to enable MFA-protected API access. Administrators may enable this feature to add an extra layer of security over access to sensitive APIs by requiring that callers authenticate with an AWS MFA device. For more information, see the MFA-protected API access documentation.
Other exceptions include the S3 PUT bucket versioning, GET bucket versioning, and DELETE object APIs, which let you require MFA authentication to delete objects or change the versioning state of your bucket. For more information, see the S3 documentation on Configuring a Bucket with MFA Delete.
For all other cases, AWS MFA does not currently change the way you access AWS service APIs.

Note: U2F security keys currently do not work with MFA-protected APIs and currently cannot be used as MFA for AWS APIs.

Q. For virtual and hardware MFA, can I use a given MFA code more than once?
No. For security reasons, you can use each MFA code provided by your virtual and hardware MFA device only once.
Q. I was recently asked to resync my MFA device because my MFA codes were being rejected. Should I be concerned?
No, this can happen occasionally. Virtual and hardware MFA rely on the clock in your MFA device being in sync with the clock on our servers, and these clocks can drift apart over time. When that happens and you use the MFA device to sign in to secure pages on the AWS website or the AWS Management Console, AWS automatically attempts to resync the MFA device by asking you to provide two consecutive MFA codes (just as you did during activation).

U2F security keys do not go out of sync and do not need a resync.
Q. My MFA device seems to be working normally, but I am not able to use it to sign in to the AWS Management Console. What should I do?
If you are using virtual or hardware MFA, we suggest you resynchronize MFA devices for your IAM user's credentials. If you already tried to resync and are still having trouble signing in, you can sign in using alternate factors of authentication and reset your MFA device.

If you are using U2F security keys, you can sign in using alternate factors of authentication and reset your MFA device.

If you are still encountering issues, contact us for help.
Q. My MFA device is lost, damaged, stolen, or not working, and now I can’t sign in to the AWS Management Console. What should I do?
If your MFA device is associated with an AWS root account:

You can reset your MFA device on the AWS Management Console by first signing in with your password and then verifying the email address and phone number associated with your root account.

If your MFA device is lost, damaged, stolen or not working, you can sign in using alternative factors of authentication, deactivate the MFA device, and activate a new MFA device. As a security best practice, we recommend that you change your root account’s password.

If you need a new MFA device, you can purchase a new MFA device from a third-party provider, Yubico or Gemalto, or provision a new virtual MFA device under your account by using the IAM console.

If you have tried the preceding approaches and are still having trouble signing in, contact AWS Support.

Q. How do I disable AWS MFA?

To disable AWS MFA for your AWS account, deactivate your MFA device on the Security Credentials page. To disable AWS MFA for your IAM users, use the IAM console or the AWS CLI.
Q. Can I use AWS MFA in GovCloud?
Yes, you can use AWS virtual MFA and hardware MFA devices in GovCloud.
''MFA-protected API access''

Q. What is MFA-protected API access?
MFA-protected API access is optional functionality that lets account administrators enforce additional authentication for customer-specified APIs by requiring that users provide a second authentication factor in addition to a password. Specifically, it enables administrators to include conditions in their IAM policies that check for and require MFA authentication for access to selected APIs. Users making calls to those APIs must first get temporary credentials that indicate the user entered a valid MFA code.

Q. Can I use my U2F security key with MFA-protected APIs?

No. MFA-protected APIs currently do not support U2F security keys.

Q. What problem does MFA-protected API access solve?
Previously, customers could require MFA for access to the AWS Management Console, but could not enforce MFA requirements on developers and applications interacting directly with AWS service APIs. MFA-protected API access ensures that IAM policies are universally enforced regardless of access path. As a result, you can now develop your own application that uses AWS and prompts the user for MFA authentication before calling powerful APIs or accessing sensitive resources.

Q. How do I get started with MFA-protected API access?
You can get started in two simple steps:

1. Assign an MFA device to your IAM users. You can purchase a hardware key fob, or download a free TOTP-compatible application for your smartphone, tablet, or computer. See the MFA detail page for more information on AWS MFA devices.
2. Enable MFA-protected API access by creating permission policies for the IAM users and/or IAM groups from which you want to require MFA authentication. To learn more about access policy language syntax, see the access policy language documentation.
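As a sketch of the second step, such a policy is typically expressed with the aws:MultiFactorAuthPresent condition key. The actions, Sid, and resource below are illustrative choices, not values prescribed by the FAQ:

```python
import json

# Illustrative identity-based policy: deny sensitive EC2 actions unless the
# caller's temporary credentials were obtained with a valid MFA code.
mfa_protected_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyStopTerminateWithoutMFA",          # example Sid
        "Effect": "Deny",
        "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
        "Resource": "*",
        # BoolIfExists also denies callers with no MFA context at all
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
print(json.dumps(mfa_protected_policy, indent=2))
```

Attached to a user or group, this denies the listed calls unless the caller first obtained MFA-validated temporary security credentials from STS.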
Q. How do developers and users access APIs and resources secured with MFA-protected API access?
Developers and users interact with MFA-protected API access both in the AWS Management Console and at the APIs.

In the AWS Management Console, any MFA-enabled IAM user must authenticate with their device to sign in. Users that do not have MFA do not receive access to MFA-protected APIs and resources.

At the API level, developers can integrate AWS MFA into their applications to prompt users to authenticate using their assigned MFA devices before calling powerful APIs or accessing sensitive resources. Developers enable this functionality by adding optional MFA parameters (serial number and MFA code) to requests to obtain temporary security credentials (such requests are also referred to as “session requests”). If the parameters are valid, temporary security credentials that indicate MFA status are returned. See the temporary security credentials documentation for more information.
Q. Who can use MFA-protected API access?
MFA-protected API access is available for free to all AWS customers.

Q. Which services does MFA-protected API access work with?
MFA-protected API access is supported by all AWS services that support temporary security credentials. For a list of supported services, see AWS Services that Work with IAM and review the column labeled Supports temporary security credentials.
Q. What happens if a user provides incorrect MFA device information when requesting temporary security credentials?
The request to issue temporary security credentials fails. Temporary security credential requests that specify MFA parameters must provide the correct serial number of the device linked to the IAM user as well as a valid MFA code.

Q. Does MFA-protected API access control API access for AWS root accounts?
No, MFA-protected API access only controls access for IAM users. Root accounts are not bound by IAM policies, which is why we recommend that you create IAM users to interact with AWS service APIs rather than use AWS root account credentials.

Q. Do users have to have an MFA device assigned to them in order to use MFA-protected API access?
Yes, a user must first be assigned a unique hardware or virtual MFA device.

Q. Is MFA-protected API access compatible with S3 objects, SQS queues, and SNS topics?
Yes.

Q. How does MFA-protected API access interact with existing MFA use cases such as S3 MFA Delete?
MFA-protected API access and S3 MFA Delete do not interact with each other. S3 MFA Delete currently does not support temporary security credentials. Instead, calls to the S3 MFA Delete API must be made using long-term access keys.

Q. Does MFA-protected API access work in the GovCloud (US) region?
Yes.

Q. Does MFA-protected API access work for federated users?
Customers cannot use MFA-protected API access to control access for federated users. The GetFederationToken API does not accept MFA parameters. Because federated users can’t authenticate with AWS MFA devices, they are unable to access resources designated using MFA-protected API access.

''Pricing''
Q. What will I be charged for using AWS IAM?

IAM is a feature of your AWS account offered at no additional charge. You will be charged only for the use of other AWS services by your users.

Ref: https://aws.amazon.com/iam/faqs/
!IAM in Practice
!!How do I set up IAM for my organization?
AWS Identity and Access Management (IAM) is a powerful and flexible web service for controlling access to AWS resources. IAM enables customers to leverage the agility and efficiency of the cloud while maintaining secure control of their organization’s AWS infrastructure. IAM administrators new to AWS can sometimes be overwhelmed by the options available as they face competing goals: securing the environment while quickly enabling new users to accomplish their jobs. Further complicating the task, the initial controls they implement must grow and adapt without disrupting productivity as the company navigates its path to the cloud.

This webpage provides best practices and guidance to help IAM administrators quickly establish an initial set of controls that protect their infrastructure, empower users, and allow for growth and change in their organization’s use of AWS. The following sections assume a working knowledge of how to configure the IAM service.

!!Guidelines and Best Practices
While it can take weeks or months to lay out a full access-control strategy for an organization, there are some universal best practices to apply immediately to ensure security in the cloud. (For a complete list of IAM best practices, refer to the IAM User Guide.) These practices are important for both new organizations and established organizations with mature security processes, as they help ensure that the initial strategy stays relevant as teams and resources grow in size and complexity.

''Create groups that reflect organizational roles, not technical commonality.'' Two users who require the same technical permissions to perform two different roles in an organization should be assigned to two different groups based on their roles, rather than a single group based on the technical permission. The function of the roles will likely change over time, requiring different technical permissions. If the users are grouped by role, permission changes can be targeted to the changing role, reducing the risk of inadvertently granting new privileges to extra users.
For example, a Data Warehouse Admin and a Data Scientist might both need the ability to launch an Amazon Redshift cluster. In time, the organization decides to move its analytics from Amazon Redshift to Amazon Elastic MapReduce (EMR). If both employees were mapped to a group called LaunchRedshiftCluster, the IAM administrator would have to either a) increase the permissions of all the users in the group or b) break the group in two and reassign the users manually (introducing the risk of error). With two groups, the IAM administrator merely removes the Amazon Redshift privilege from the Data Scientist group and adds the Amazon EMR privilege; the Data Warehouse Admin group remains unchanged. The latter option decreases overall effort, minimizes the blast radius of the change, and reduces the risk of human error.
''Have a documented process for removing unnecessary users and credentials.'' Know what steps are required to remove a user and write them down. Ideally, write a script to reduce the chance of error.
''Enable MFA for privileged users.'' Use MFA to lock down administrative accounts (e.g., the root account, IAM administrators, and system administrators). MFA adds a "something you have" factor to the "something you know" factor of authentication, reducing the risk of a security breach. This can be implemented using a hardware device or a virtual MFA app.
''Rotate credentials regularly.'' Credentials are secrets, and the longer a secret exists the more likely it is to be compromised. Rotating credentials regularly helps mitigate this risk. Using temporary security tokens, for example through Amazon Elastic Compute Cloud (Amazon EC2) roles, removes the need for persistent credentials entirely.
''Use managed policies rather than group or user policies.'' Although IAM policies can be directly associated with a user or group, use the newer capability of managed policies. Managed policies are reusable because they are decoupled from groups, making it easier to achieve technical commonality across different roles (as described above) without misusing groups.
''Make policies granular.'' Add privileges to a group using multiple granular policies rather than one giant policy. For example, a Data Scientist might need access to write to Amazon Simple Storage Service (Amazon S3) to upload raw data, and also to Amazon EMR to launch clusters. Create a managed policy for S3 and a separate managed policy for Amazon EMR rather than combining the privileges into a single policy. This helps promote policy reuse and makes it easier to manage permissions.
''Use EC2 roles rather than access keys.'' An access key is a secret and must be stored in a location that is secure but can also be easily referenced during the bootstrap process. Once acquired, the key must be securely stored on the EC2 instance yet remain retrievable in order to access AWS resources. These conflicts create a perpetual risk in the DevOps process, a problem only exacerbated by the need to regularly rotate keys. EC2 roles eliminate the need for an access key, and the underlying technology of temporary security tokens eliminates the need for key rotation.
''Use roles rather than user credentials to grant cross-account access.'' The safest and easiest way to grant access to users in different AWS accounts is to create a role with specific privileges and grant other accounts the right to assume that role. The administrator for the other account can then allow specific IAM users to switch to the role as necessary to use its permissions on a temporary basis. Using roles eliminates the responsibility of creating, managing, rotating, and securely delivering access keys for individual users from different accounts.
''Use conditions to make policies more granular.'' IAM conditions provide a wealth of possibilities to refine policies. These include locking down administrator accounts to work only from a specific IP address range, limiting developer permissions to specific subnets, granting a permission for a specific time window, and much more. Learn the various conditions available and leverage them to create better policies.
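As an illustration of the last point, a policy can restrict administrative actions to a corporate network with the aws:SourceIp condition key. The action scope and CIDR below are example values, not recommendations:

```python
import json

# Illustrative policy: allow IAM administration only from an example
# corporate IP range (203.0.113.0/24 is a documentation-reserved block).
ip_restricted_admin = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:*",
        "Resource": "*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}
print(json.dumps(ip_restricted_admin, indent=2))
```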
!First Steps
The following sections describe how to start using IAM, including how to secure an AWS account, create IAM users, groups, and policies, and how to prepare for future growth and change in AWS use.

!!Securing the IAM Administrator Account
Before granting users the access they need, complete the following steps to move forward swiftly and securely.

1. Log in with the root account credentials, and configure baseline security settings according to the AWS Secure Initial Account Setup Solution Brief.
2. Use the IAM console to create a customized console login address. A custom console address not only obscures the account number, it also provides a more user-friendly URL for users to use when accessing the AWS console.
3. Create a password policy.
4. Create an IAM Administrators group and assign it the managed policy IAMFullAccess.
5. Create an IAM Administrator user and add it to the IAM Administrators group.
6. Create a password for the IAM Administrator user.
7. Add virtual MFA to the IAM Administrator user.
8. Log out of the account, and log back in using the custom console URL and the new IAM Administrator credentials.
All new users and processes should now be set up using the new IAM Administrator account. Lock away the root account credentials and hardware device until needed to perform an account-level action that requires root credentials.

!!Creating Users and Groups
With new administrator credentials configured, it’s time to apply the general best practices. This sounds simple in concept, but can be challenging in actual execution–especially when starting out. Here are some steps to help get started.

1. Identify the first person to be granted access to AWS infrastructure. Explicitly state any associated business roles for that person. These business roles should be very granular, and a person can fulfill several business roles.
2. Create an IAM group for each business role.
3. Identify the AWS permissions required to fulfill the tasks of each business role. Create managed policies for each task and assign them to the appropriate group.
4. Create an IAM user for the person and assign it to the groups representing the appropriate business roles. Assign a user name and password to the account. If this person needs to use the CLI or other tools to access the AWS environment, create an access key as well.
5. Complete these steps for all subsequent users, mapping their roles to existing groups and creating new groups if needed. Watch for situations where the second user fills only part of an existing role and consider splitting the associated group into two groups.
The following table shows examples of how some typical users might be mapped to groups and permissions.

|!Employee|!Group/Business Role|!Permissions|
|Accounts Payable Clerk|Review Bills|Service: AWS Billing; Action: View*; ARN: * (no conditions)|
|Comptroller|Review Bills|Service: AWS Billing; Action: View*; ARN: * (no conditions)|
|Data Scientist|Run Data Experiment|Service: Amazon Elastic MapReduce; Action: *; ARN: * (no conditions)|
|~|~|Service: Amazon S3; Action: Get*, List*, Put*; ARN: input and output buckets (no conditions)|
|Data Administrator|Prepare Data for Analysis|Service: Amazon S3; Action: Get*, List*, Put*; ARN: input and output buckets (no conditions)|
|Developer|Test Newly Developed Features|Service: Amazon EC2; Action: *Instances, *Volume, Describe*, CreateTags; Condition: Dev subnets only|
|Tester|Run Test Scripts|Service: Amazon EC2; Action: *Instances, *Volume, Describe*, CreateTags; Condition: Test subnets only|
|IT Manager|Review Deployed Infrastructure|Managed Policy: ReadOnlyAccess (note: this policy automatically covers new services as they are added, with no management needed)|
It is common for there to be some iteration when initially setting up policies, as it is not always immediately apparent which permissions are needed to accomplish a given task. If the IAM administrator and user are in the same room when the user first attempts to implement their use case, the administrator can quickly add missing privileges to the policy as they are discovered.

!Looking to the Future
Customers’ perceptions of the cloud usually change drastically as they start to experience the significant changes it can bring to an IT environment. The guidelines outlined on this webpage will prepare an organization for common growth scenarios, such as:

''Federating existing users'' – As the number of employees using AWS increases, many companies choose to base authentication on their internal user directory rather than replicate all employees in IAM. AWS supports this federation through SAML and OIDC. By following the best practices of creating groups that reflect business roles and managing the policies separately, an organization will be ready to map those groups to business roles as defined through SAML attributes and assign the same privileges appropriately.
''Creating multiple accounts'' – Companies with extensive use of AWS often open multiple linked AWS accounts to help segregate billing, limit access, and minimize the blast radius of any security issues. If administrators follow the best practice of grouping users by business role rather than by technology, they can more easily map each group to the appropriate account. Since policies are managed independently of groups and users, administrators can create roles for cross-account access using the same policies they already created.

Ref: https://aws.amazon.com/answers/security/aws-iam-in-practice/
!Instance Features
Amazon EC2 instances provide a number of additional features to help you deploy, manage, and scale your applications.
!Burstable Performance Instances
Amazon EC2 allows you to choose between Fixed Performance Instances (e.g. M5, C5, and R5) and Burstable Performance Instances (e.g. T3). Burstable Performance Instances provide a baseline level of CPU performance with the ability to burst above the baseline.

T Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T Unlimited instances will provide ample performance without any additional charges. The hourly T instance price automatically covers all interim spikes in usage when the average CPU utilization of a T instance is at or less than the baseline over a 24-hour window. If the instance needs to run at higher CPU utilization for a prolonged period, it can do so at a flat additional charge of 5 cents per vCPU-hour.

T instances’ baseline performance and ability to burst are governed by CPU Credits. Each T instance receives CPU Credits continuously, the rate of which depends on the instance size. T instances accrue CPU Credits when they are idle, and use CPU credits when they are active. A CPU Credit provides the performance of a full CPU core for one minute.

For example, a t2.small instance receives credits continuously at a rate of 12 CPU Credits per hour. This capability provides baseline performance equivalent to 20% of a CPU core (20% x 60 mins = 12 mins). If the instance does not use the credits it receives, they are stored in its CPU Credit balance up to a maximum of 288 CPU Credits. When the t2.small instance needs to burst to more than 20% of a core, it draws from its CPU Credit balance to handle this surge automatically.

With T2 Unlimited enabled, the t2.small instance can burst above the baseline even after its CPU Credit balance is drawn down to zero. For a vast majority of general purpose workloads where the average CPU utilization is at or below the baseline performance, the basic hourly price for t2.small covers all CPU bursts. If the instance happens to run at an average 25% CPU utilization (5% above baseline) over a period of 24 hours after its CPU Credit balance is drawn to zero, it will be charged an additional 6 cents (5 cents/vCPU-hour x 1 vCPU x 5% x 24 hours).
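The t2.small arithmetic above can be checked with a short calculation; the 12-credit/hour rate, 20% baseline, and 5¢/vCPU-hour surplus rate all come from the text:

```python
# Reproduce the T2 credit arithmetic from the text for a t2.small:
# 12 CPU Credits/hour = 12 minutes of a full core per hour = 20% baseline.
credits_per_hour = 12
baseline_fraction = credits_per_hour / 60          # 0.2, i.e. 20% of a core

# Surplus charge once the credit balance is exhausted:
# 5 cents per vCPU-hour of usage above the baseline.
surplus_rate_cents = 5
vcpus = 1
avg_utilization = 0.25                             # 25% average over 24 hours
hours = 24

excess = max(avg_utilization - baseline_fraction, 0)   # 5% above baseline
extra_charge_cents = surplus_rate_cents * vcpus * excess * hours
print(baseline_fraction, round(extra_charge_cents, 2))  # 0.2 6.0
```

This reproduces the 6-cent figure: 5 cents/vCPU-hour × 1 vCPU × 5% × 24 hours.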

Many applications such as web servers, developer environments and small databases don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when they need them. T instances are engineered specifically for these use cases. If you need consistently high CPU performance for applications such as video encoding, high volume websites or HPC applications, we recommend you use Fixed Performance Instances. T instances are designed to perform as if they have dedicated high speed Intel cores available when your application really needs CPU performance, while protecting you from the variable performance or other common side-effects you might typically see from over-subscription in other environments.
!Multiple Storage Options
Amazon EC2 allows you to choose between multiple storage options based on your requirements. Amazon EBS is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on Amazon EC2. Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance. Once a volume is attached to an instance you can use it like any other physical hard drive. Amazon EBS provides three volume types to best meet the needs of your workloads:
*''General Purpose (SSD)'' is the SSD-backed, general purpose EBS volume type that we recommend as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes.
*''Provisioned IOPS (SSD)'' volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases.
*''Magnetic'' volumes provide the lowest cost per gigabyte of all EBS volume types, and are ideal for workloads where data is accessed infrequently and applications where the lowest storage cost is important.

Many Amazon EC2 instances can also include storage from disks that are physically attached to the host computer. This disk storage is referred to as instance store. Instance store provides temporary block-level storage for Amazon EC2 instances. The data on an instance store volume persists only during the life of the associated Amazon EC2 instance.

In addition to block level storage via Amazon EBS or instance store, you can also use Amazon S3 for highly durable, highly available object storage. Learn more about Amazon EC2 storage options from the Amazon EC2 documentation.
!EBS-optimized Instances
For an additional, low, hourly fee, customers can launch selected Amazon EC2 instances types as EBS-optimized instances. For C5, C4, M5, M4, P3, P2, G3, and D2 instances, this feature is enabled by default at no additional cost. EBS-optimized instances enable EC2 instances to fully use the IOPS provisioned on an EBS volume. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 and 4,000 Megabits per second (Mbps) depending on the instance type used. The dedicated throughput minimizes contention between Amazon EBS I/O and other traffic from your EC2 instance, providing the best performance for your EBS volumes. EBS-optimized instances are designed for use with both Standard and Provisioned IOPS Amazon EBS volumes. When attached to EBS-optimized instances, Provisioned IOPS volumes can achieve single digit millisecond latencies and are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time. We recommend using Provisioned IOPS volumes with EBS-optimized instances or instances that support cluster networking for applications with high storage I/O requirements.
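On older instance types, EBS optimization is a per-instance launch flag. A hedged boto3 sketch (the AMI ID is a placeholder and the function is not invoked here; on C5/M5-era types the feature is already on by default, as noted above):

```python
def launch_ebs_optimized(ami_id, instance_type="m4.xlarge"):
    """Launch an instance with dedicated EBS throughput enabled.
    Requires AWS credentials; not run here."""
    import boto3  # imported lazily so the sketch stays self-contained
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1, MaxCount=1,
        EbsOptimized=True,   # request dedicated Amazon EBS bandwidth
    )

# e.g. launch_ebs_optimized("ami-0123456789abcdef0")
```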

!Cluster Networking
Select EC2 instances support cluster networking when launched into a common cluster placement group. A cluster placement group provides low-latency networking between all instances in the cluster. The bandwidth an EC2 instance can utilize depends on the instance type and its networking performance specification. Inter instance traffic within the same region can utilize up to 5 Gbps for single-flow and up to 25 Gbps for multi-flow traffic in each direction (full duplex). Traffic to and from S3 buckets in the same region can also utilize all available instance aggregate bandwidth. When launched in a placement group, instances can utilize up to 10 Gbps for single-flow traffic and up to 25 Gbps for multi-flow traffic. Network traffic to the Internet is limited to 5 Gbps (full duplex). Cluster networking is ideal for high performance analytics systems and many science and engineering applications, especially those using the MPI library standard for parallel programming.


!Intel Processor Features
Amazon EC2 instances that feature an Intel processor may provide access to the following processor features:

*''Intel AES New Instructions (AES-NI): ''Intel AES-NI encryption instruction set improves upon the original Advanced Encryption Standard (AES) algorithm to provide faster data protection and greater security. All current generation EC2 instances support this processor feature.
*''Intel Advanced Vector Extensions (Intel AVX, Intel AVX2 and Intel AVX-512):'' Intel AVX and Intel AVX2 are 256-bit instruction set extensions, and Intel AVX-512 is a 512-bit instruction set extension, designed for applications that are Floating Point (FP) intensive. Intel AVX instructions improve performance for applications like image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and analysis. These features are only available on instances launched with HVM AMIs.
*''Intel Turbo Boost Technology:'' Intel Turbo Boost Technology provides more performance when needed. The processor is able to automatically run cores faster than the base operating frequency to help you get more done faster.
Not all processor features are available in all instance types; check the instance type matrix for more detailed information on which features are available from which instance types.

!Measuring Instance Performance
Why should you measure instance performance?
Amazon EC2 allows you to provision a variety of instances types, which provide different combinations of CPU, memory, disk, and networking. Launching new instances and running tests in parallel is easy, and we recommend measuring the performance of applications to identify appropriate instance types and validate application architecture. We also recommend rigorous load/scale testing to ensure that your applications can scale as you intend.
!!Considerations for Amazon EC2 performance evaluation
Amazon EC2 provides you with a large number of options across ten different instance types, each with one or more size options, organized into six distinct instance families optimized for different types of applications. We recommend that you assess the requirements of your applications and select the appropriate instance family as a starting point for application performance testing. You should start evaluating the performance of your applications by (a) identifying how your application needs compare to different instance families (e.g. is the application compute-bound, memory-bound, etc.?), and (b) sizing your workload to identify the appropriate instance size. There is no substitute for measuring the performance of your full application since application performance can be impacted by the underlying infrastructure or by software and architectural limitations. We recommend application-level testing, including the use of application profiling and load testing tools and services.
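Step (a) above, matching the application's bottleneck to an instance family, can be sketched as a toy triage table. The family names follow the instance descriptions elsewhere on this page; the mapping itself is an illustrative simplification, not an AWS API:

```python
# Illustrative first-pass triage from workload bottleneck to instance family.
# This is a discussion aid, not a substitute for application-level load testing.
FAMILY_BY_BOTTLENECK = {
    "balanced": "M5 (general purpose)",
    "compute":  "C5 (compute optimized)",
    "memory":   "R5 (memory optimized)",
    "gpu":      "P2/P3 (accelerated computing)",
    "storage":  "D2/I3 (storage optimized)",
    "bursty":   "T3 (burstable)",
}

def suggest_family(bottleneck: str) -> str:
    """Return a starting-point family; default to general purpose."""
    return FAMILY_BY_BOTTLENECK.get(bottleneck, FAMILY_BY_BOTTLENECK["balanced"])

print(suggest_family("memory"))   # R5 (memory optimized)
```

Step (b), sizing, then means load-testing candidate sizes within the chosen family rather than guessing.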
/***
|''Name:''|LoadRemoteFileThroughProxy (previous LoadRemoteFileHijack)|
|''Description:''|When the TiddlyWiki file is located on the web (view over http) the content of [[SiteProxy]] tiddler is added in front of the file url. If [[SiteProxy]] does not exist "/proxy/" is added. |
|''Version:''|1.1.0|
|''Date:''|mar 17, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#LoadRemoteFileHijack|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0|
***/
//{{{
version.extensions.LoadRemoteFileThroughProxy = {
 major: 1, minor: 1, revision: 0, 
 date: new Date("mar 17, 2007"), 
 source: "http://tiddlywiki.bidix.info/#LoadRemoteFileThroughProxy"};

if (!window.bidix) window.bidix = {}; // bidix namespace
if (!bidix.core) bidix.core = {};

bidix.core.loadRemoteFile = loadRemoteFile;
loadRemoteFile = function(url,callback,params)
{
 if ((document.location.toString().substr(0,4) == "http") && (url.substr(0,4) == "http")){ 
 url = store.getTiddlerText("SiteProxy", "/proxy/") + url;
 }
 return bidix.core.loadRemoteFile(url,callback,params);
}
//}}}
M4 instances provide a balance of compute, memory, and network resources, making them a good choice for many applications.

''Features:''

*2.3 GHz Intel Xeon® E5-2686 v4 (Broadwell) processors or 2.4 GHz Intel Xeon® E5-2676 v3 (Haswell) processors
*EBS-optimized by default at no additional cost
*Support for Enhanced Networking
*Balance of compute, memory, and network resources
|Model	|vCPU	|Mem (GiB)	|Storage	|Dedicated EBS Bandwidth (Mbps)	|Network Performance|
|m4.large	|2	|8	|EBS-only	|450	|Moderate|
|m4.xlarge	|4	|16	|EBS-only	|750	|High|
|m4.2xlarge	|8	|32	|EBS-only	|1,000	|High|
|m4.4xlarge	|16	|64	|EBS-only	|2,000	|High|
|m4.10xlarge	|40	|160	|EBS-only	|4,000	|10 Gigabit|
|m4.16xlarge	|64	|256	|EBS-only	|10,000	|25 Gigabit|
All instances have the following specs:

*2.4 GHz Intel Xeon E5-2676 v3 Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases''

Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.
M5 instances are the latest generation of General Purpose Instances. This family provides a balance of compute, memory, and network resources, and is a good choice for many applications.

''Features:''

*Up to 3.1 GHz Intel Xeon® Platinum 8175 processors with new Intel Advanced Vector Extension (AVX-512) instruction set
*New larger instance size, m5.24xlarge, offering 96 vCPUs and 384 GiB of memory
*Up to 25 Gbps network bandwidth using Enhanced Networking
*Requires HVM AMIs that include drivers for ENA and NVMe
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
*With M5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
*New 8xlarge and 16xlarge sizes now available.
|Model	|vCPU	|Memory (GiB)	|Instance Storage (GiB)	|Network Bandwidth (Gbps)|EBS Bandwidth (Mbps)|
|m5.large	|2	|8|EBS-Only	|Up to 10	|Up to 3,500|
|m5.xlarge	|4	|16	|EBS-Only	|Up to 10	|Up to 3,500|
|m5.2xlarge	|8	|32	|EBS-Only	|Up to 10	|Up to 3,500|
|m5.4xlarge	|16	|64	|EBS-Only	|Up to 10	|3,500|
|m5.8xlarge	|32	|128	|EBS-Only	|10	|5,000|
|m5.12xlarge	|48	|192	|EBS-Only	|10	|7,000|
|m5.16xlarge	|64	|256	|EBS-Only	|20	|10,000|
|m5.24xlarge	|96	|384	|EBS-Only	|25	|14,000|
|m5.metal	|96*	|384	|EBS-Only	|25	|14,000|
|m5d.large	|2	|8	|1 x 75 NVMe SSD	|Up to 10	|Up to 3,500|
|m5d.xlarge	|4	|16	|1 x 150 NVMe SSD	|Up to 10	|Up to 3,500|
|m5d.2xlarge	|8	|32	|1 x 300 NVMe SSD	|Up to 10	|Up to 3,500|
|m5d.4xlarge	|16	|64	|2 x 300 NVMe SSD	|Up to 10	|3,500|
|m5d.8xlarge	|32	|128	|2 x 600 NVMe SSD	|10	|5,000|
|m5d.12xlarge	|48	|192	|2 x 900 NVMe SSD	|10	|7,000|
|m5d.16xlarge	|64	|256	|4 x 600 NVMe SSD	|20	|10,000|
|m5d.24xlarge	|96	|384	|4 x 900 NVMe SSD	|25	|14,000|
|m5d.metal	|96*	|384	|4 x 900 NVMe SSD	|25	|14,000|
* m5.metal and m5d.metal provide 96 logical processors on 48 physical cores

All instances have the following specs:

*Up to 3.1 GHz Intel Xeon Platinum Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†


''Use Cases''

Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
M5a instances are the latest generation of General Purpose Instances. This family provides a balance of compute, memory, and network resources, and is a good choice for many applications.

''Features:''

*AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz
*New larger instance size, m5a.24xlarge, offering 96 vCPUs and 384 GiB of memory
*Up to 20 Gbps network bandwidth using Enhanced Networking
*Requires HVM AMIs that include drivers for ENA and NVMe
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
*With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5a instance
*New 8xlarge and 16xlarge sizes now available.
|Model	|vCPU	|Memory (GiB)	|Instance Storage (GiB)	|Network Bandwidth (Gbps)	|EBS Bandwidth (Mbps)|
|m5a.large	|2	|8	|EBS-Only	|Up to 10	|Up to 2,120|
|m5a.xlarge	|4	|16	|EBS-Only	|Up to 10	|Up to 2,120|
|m5a.2xlarge	|8	|32	|EBS-Only	|Up to 10	|Up to 2,120|
|m5a.4xlarge	|16	|64	|EBS-Only	|Up to 10	|2,120|
|m5a.8xlarge	|32	|128	|EBS-Only	|Up to 10	|3,500|
|m5a.12xlarge	|48	|192	|EBS-Only	|10	|5,000|
|m5a.16xlarge	|64	|256	|EBS-Only	|12	|7,000|
|m5a.24xlarge	|96	|384	|EBS-Only	|20	|10,000|
|m5ad.large	|2	|8	|1 x 75 NVMe SSD	|Up to 10	|Up to 2,120|
|m5ad.xlarge	|4	|16	|1 x 150 NVMe SSD	|Up to 10	|Up to 2,120|
|m5ad.2xlarge	|8	|32	|1 x 300 NVMe SSD	|Up to 10	|Up to 2,120|
|m5ad.4xlarge	|16	|64	|2 x 300 NVMe SSD	|Up to 10	|2,120|
|m5ad.12xlarge	|48	|192	|2 x 900 NVMe SSD	|10	|5,000|
|m5ad.24xlarge	|96	|384	|4 x 900 NVMe SSD	|20	|10,000|
All instances have the following specs:

*2.5 GHz AMD EPYC 7000 series processors
*EBS Optimized
*Enhanced Networking†

''Use Cases''

Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
[[contents]]
P2 instances are intended for general-purpose GPU compute applications.

''Features:''

*High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
*High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12 GiB of GPU memory
*Supports GPUDirect™ for peer-to-peer GPU communications
*Provides Enhanced Networking using Elastic Network Adapter (ENA) with up to 25 Gbps of aggregate network bandwidth within a Placement Group
*EBS-optimized by default at no additional cost
|Model	|GPUs	|vCPU	|Mem (GiB)	|GPU Memory (GiB)	|Network Performance|
|p2.xlarge	|1	|4	|61	|12	|High|
|p2.8xlarge	|8	|32	|488	|96	|10 Gigabit|
|p2.16xlarge	|16	|64	|732	|192	|25 Gigabit|
All instances have the following specs:

*2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
*Intel AVX, Intel AVX2, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases''

Machine learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.
/***
|''Name:''|PasswordOptionPlugin|
|''Description:''|Extends TiddlyWiki options with non encrypted password option.|
|''Version:''|1.0.2|
|''Date:''|Apr 19, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#PasswordOptionPlugin|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0 (Beta 5)|
***/
//{{{
version.extensions.PasswordOptionPlugin = {
	major: 1, minor: 0, revision: 2, 
	date: new Date("Apr 19, 2007"),
	source: 'http://tiddlywiki.bidix.info/#PasswordOptionPlugin',
	author: 'BidiX (BidiX (at) bidix (dot) info',
	license: '[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D]]',
	coreVersion: '2.2.0 (Beta 5)'
};

config.macros.option.passwordCheckboxLabel = "Save this password on this computer";
config.macros.option.passwordInputType = "password"; // password | text
setStylesheet(".pasOptionInput {width: 11em;}\n","passwordInputTypeStyle");

merge(config.macros.option.types, {
	'pas': {
		elementType: "input",
		valueField: "value",
		eventName: "onkeyup",
		className: "pasOptionInput",
		typeValue: config.macros.option.passwordInputType,
		create: function(place,type,opt,className,desc) {
			// password field
			config.macros.option.genericCreate(place,'pas',opt,className,desc);
			// checkbox linked with this password "save this password on this computer"
			config.macros.option.genericCreate(place,'chk','chk'+opt,className,desc);			
			// text savePasswordCheckboxLabel
			place.appendChild(document.createTextNode(config.macros.option.passwordCheckboxLabel));
		},
		onChange: config.macros.option.genericOnChange
	}
});

merge(config.optionHandlers['chk'], {
	get: function(name) {
		// is there an option linked with this chk ?
		var opt = name.substr(3);
		if (config.options[opt]) 
			saveOptionCookie(opt);
		return config.options[name] ? "true" : "false";
	}
});

merge(config.optionHandlers, {
	'pas': {
 		get: function(name) {
			if (config.options["chk"+name]) {
				return encodeCookie(config.options[name].toString());
			} else {
				return "";
			}
		},
		set: function(name,value) {config.options[name] = decodeCookie(value);}
	}
});

// need to reload options to load passwordOptions
loadOptionsCookie();

/*
if (!config.options['pasPassword'])
	config.options['pasPassword'] = '';

merge(config.optionsDesc,{
		pasPassword: "Test password"
	});
*/
//}}}
''Q1.1:'' AWS EC2 can be used for:
''A: a.''  Hosting a dynamic website with content in a database
''b.'' Storing files
''c. '' 3rd-party applications
''d. '' Running a Windows .NET application
''e.'' A database server running Microsoft SQL Server
''f.'' All of the above.  Correct

''Q1.2:'' Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
''A: 1. '' During the time the instance is stopped your account will be charged for EIP Correct
''2. '' The public and private IP addresses will remain the same Correct

''Q1.3: '' In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics:
''A: '' Hypervisor visible metrics such as CPU utilization 

''Q1.4: '' You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most reliable and efficient way.
''A: '' On-Demand instances

''Q1.5:'' A corporate customer has a live website running a CMS that has its own database. Everything will be installed on the EC2 instance, which will use an EBS volume for boot and storage. The specifications require 4 CPU cores and 8 GB of RAM. Which instance type would you recommend? You need to consider cost, reliability, and fit with the rest of their fleet.
''A: '' c5.xlarge Correct

''Q2.1:'' Your customer has a database that is not supported by RDS. They have estimated that they need 5,000 IOPS
and 250 GB of storage for the database. What would you recommend?  ''A:''  EBS - provisioned IOPS

''Q2.2:''  Which of the following can be used as a backup and restore solutions within AWS? (choose multiple) ''A: 1.''
AWS Elastic Block Store      ''2. '' AWS Storage Gateway   ''3.''   AWS S3

''Q2.3: ''  You have some large files that you would like customers to access securely. You have a web-based application
that can check the customers' access to the files. Once they have been granted access, you would like the URL to expire within 1 hour. Which recommendation best fits this requirement?
''A. :'' Use custom code to generate signed S3 URLs
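The "custom code" in this answer amounts to generating a presigned S3 URL. A minimal boto3 sketch (bucket and key names are placeholders, and the function is not invoked here since it assumes configured AWS credentials):

```python
def make_download_url(bucket, key, expires_seconds=3600):
    """Generate a presigned S3 GET URL that expires after one hour."""
    import boto3  # imported lazily so the sketch stays self-contained
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,   # 3600 s = 1 hour, per the requirement
    )

# The web application would call this after verifying the customer's access:
# url = make_download_url("my-downloads-bucket", "files/big-file.zip")
```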

''Q2.4:''  My Financials Pty Ltd stores all its billing invoices in S3 buckets. Sarah says one of her colleagues has deleted an invoice stored in the cloud. Sarah thinks it’s lost forever, and contacts her cloud consultant. Garry, the AWS certified consultant, says he can recover the data easily. Choose the correct answer(s) below.
''A: 1'' Garry knows that versioning on the S3 bucket has been enabled, and can be used to recover deleted files Correct   ''2. '' Garry is confident the file is still there as MFA Delete has been enabled Correct
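Assuming versioning was enabled, Garry's recovery amounts to removing the delete marker so the previous version becomes current again. A hedged boto3 sketch (bucket and key names are placeholders, and the function is not invoked here):

```python
def undelete_object(bucket, key):
    """Recover a deleted object in a versioned bucket by removing its
    delete marker. Requires AWS credentials; not run here."""
    import boto3  # imported lazily so the sketch stays self-contained
    s3 = boto3.client("s3")
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
    for marker in versions.get("DeleteMarkers", []):
        if marker["Key"] == key and marker["IsLatest"]:
            # Deleting the delete marker itself restores the object.
            s3.delete_object(Bucket=bucket, Key=key,
                             VersionId=marker["VersionId"])
            return True
    return False   # no delete marker found; nothing to undelete

# e.g. undelete_object("myfinancials-invoices", "invoices/2019-03.pdf")
```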

''Q2.5:'' What is the maximum file size on S3?   ''A: '' 5 terabytes 

''Q3.1:'' The RDS automated backup retention period is set by default to  ''A:'' 7 days

''Q3.2:'' In a multi-AZ RDS deployment, which of the following is true?  ''A:'' It provides no increased performance

''Q3.3:'' Your organisation is developing an auction system. It’s found that the database, which is RDS MySQL, is causing
delays during peak periods in beta testing. Your lead developer noted that most values rarely change. You’re
concerned this will be a much greater issue when the system is released. What do you recommend? (select
two) '' A: 1.'' Convert to RDS Aurora   ''2.'' Recode and allow caching of recent values in Amazon ElastiCache

''Q3.4:'' You should launch RDS instances        ''A:'' In a VPC, using a private subnet

''Q3.5:'' Which of the following services give you full administrative access upon launch?  ''A: 1.'' MapReduce     ''2.'' EC2     ''3.'' Elastic Beanstalk

''Q4.1:'' Which of the following statements are true for AWS Shield? (Choose 3)
''A 1.'' Provides layer 3 and layer 4 protection ''2.''  AWS Shield Standard is included by default ''3.''  Defends against attacks like SYN/UDP floods

''Q4.2:'' Which are true for AWS WAF? (Choose 3)
''A 1.''  Provides layer 7 protection  ''2. '' Can help you defend your applications  ''3. '' Offers rate-based attack defence

''Q4.3: '' Which tools can be used to defend your EC2 instance and operating system? (choose 2)
''A 1. ''  Security groups     ''2. '' Network Access Control Lists (NACL)

''Q4.4: '' Amazon recommends the following for the AWS root account: (choose 2)
''A 1. ''  Enable MFA for the root account    ''2. '' Delete the access keys for the root account

''Q4.5: '' Using AWS IAM you can? (choose 2)
''A 1. '' Create policies for AWS resources  Correct    ''2. ''  Add users to groups Correct

R4 instances are optimized for memory-intensive applications and offer a better price per GiB of RAM than R3.

''Features:''

*High Frequency Intel Xeon E5-2686 v4 (Broadwell) processors
*DDR4 Memory
*Support for Enhanced Networking
|Model	|vCPU	|Mem (GiB)	|Storage	|Networking Performance (Gbps)|
|r4.large	|2	|15.25	|EBS-Only	|Up to 10|
|r4.xlarge	|4	|30.5	|EBS-Only	|Up to 10|
|r4.2xlarge	|8	|61	|EBS-Only	|Up to 10|
|r4.4xlarge	|16	|122	|EBS-Only	|Up to 10|
|r4.8xlarge	|32	|244	|EBS-Only	|10|
|r4.16xlarge	|64	|488	|EBS-Only	|25|
All instances have the following specs:

*2.3 GHz Intel Xeon E5-2686 v4 Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases''

High performance databases, data mining & analysis, in-memory databases, distributed web scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.
R5 instances deliver 5% more memory per vCPU than R4, and the largest size provides 768 GiB of memory. In addition, R5 instances deliver a 10% price per GiB improvement and a ~20% increase in CPU performance over R4.

''Features:''

*Up to 3.1 GHz Intel Xeon® Platinum 8175 processors with new Intel Advanced Vector Extension (AVX-512) instruction set
*Up to 768 GiB of memory per instance
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*With R5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
*New 8xlarge and 16xlarge sizes now available.
|Model	|vCPU	|Memory (GiB)	|Instance Storage (GiB)	|Networking Performance (Gbps)	|EBS Bandwidth (Mbps)|
|r5.large	|2	|16	|EBS-Only	|Up to 10	|Up to 3,500|
|r5.xlarge	|4	|32	|EBS-Only	|Up to 10	|Up to 3,500|
|r5.2xlarge	|8	|64	|EBS-Only	|Up to 10	|Up to 3,500|
|r5.4xlarge	|16	|128	|EBS-Only	|Up to 10	|3,500|
|r5.8xlarge	|32	|256	|EBS-Only	|10	|5,000|
|r5.12xlarge	|48	|384	|EBS-Only	|10	|7,000|
|r5.16xlarge	|64	|512	|EBS-Only	|20	|10,000|
|r5.24xlarge	|96	|768	|EBS-Only	|25	|14,000|
|r5.metal	|96*	|768	|EBS-Only	|25	|14,000|
|r5d.large	|2	|16	|1 x 75 NVMe SSD	|Up to 10	|Up to 3,500|
|r5d.xlarge	|4	|32	|1 x 150 NVMe SSD	|Up to 10	|Up to 3,500|
|r5d.2xlarge	|8	|64	|1 x 300 NVMe SSD	|Up to 10	|Up to 3,500|
|r5d.4xlarge	|16	|128	|2 x 300 NVMe SSD	|Up to 10	|3,500|
|r5d.8xlarge	|32	|256	|2 x 600 NVMe SSD	|10	|5,000|
|r5d.12xlarge	|48	|384	|2 x 900 NVMe SSD	|10	|7,000|
|r5d.16xlarge	|64	|512	|4 x 600 NVMe SSD	|20	|10,000|
|r5d.24xlarge	|96	|768	|4 x 900 NVMe SSD	|25	|14,000|
|r5d.metal	|96*	|768	|4 x 900 NVMe SSD	|25	|14,000|
* r5.metal and r5d.metal provide 96 logical processors on 48 physical cores

All instances have the following specs:

*Up to 3.1 GHz Intel Xeon Platinum Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases''

R5 instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
R5a instances deliver 5% more memory per vCPU than R4, and the largest size provides 768 GiB of memory. In addition, R5a instances deliver a 10% price per GiB improvement and a ~20% increase in CPU performance over R4.

''Features:''

*Up to 768 GiB of memory per instance
*AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*With R5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5a instance
*New 8xlarge and 16xlarge sizes now available.
|Model	|vCPU	|Memory (GiB)	|Instance Storage (GiB)	|Networking Performance (Gbps)	|EBS Bandwidth (Mbps)|
|r5a.large	|2	|16	|EBS-Only	|Up to 10	|Up to 2,120|
|r5a.xlarge	|4	|32	|EBS-Only	|Up to 10	|Up to 2,120|
|r5a.2xlarge	|8	|64	|EBS-Only	|Up to 10	|Up to 2,120|
|r5a.4xlarge	|16	|128	|EBS-Only	|Up to 10	|2,120|
|r5a.8xlarge	|32	|256	|EBS-Only	|Up to 10	|3,500|
|r5a.12xlarge	|48	|384	|EBS-Only	|10	|5,000|
|r5a.16xlarge	|64	|512	|EBS-Only	|12	|7,000|
|r5a.24xlarge	|96	|768	|EBS-Only	|20	|10,000|
|r5ad.large	|2	|16	|1 x 75 NVMe SSD	|Up to 10	|Up to 2,120|
|r5ad.xlarge	|4	|32	|1 x 150 NVMe SSD	|Up to 10	|Up to 2,120|
|r5ad.2xlarge	|8	|64	|1 x 300 NVMe SSD	|Up to 10	|Up to 2,120|
|r5ad.4xlarge	|16	|128	|2 x 300 NVMe SSD	|Up to 10	|2,120|
|r5ad.12xlarge	|48	|384	|2 x 900 NVMe SSD	|10	|5,000|
|r5ad.24xlarge	|96	|768	|4 x 900 NVMe SSD	|20	|10,000|
All instances have the following specs:

*2.5 GHz AMD EPYC 7000 series processors
*EBS Optimized
*Enhanced Networking†

''Use Cases''

R5a instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline.

T2 Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T2 Unlimited instances will provide ample performance without any additional charges. If the instance needs to run at higher CPU utilization for a prolonged period, it can also do so at a flat additional charge of 5 cents per vCPU-hour.

The baseline performance and ability to burst are governed by CPU Credits. T2 instances receive CPU Credits continuously at a set rate depending on the instance size, accumulating CPU Credits when they are idle, and consuming CPU credits when they are active. T2 instances are a good choice for a variety of general-purpose workloads including micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development, build and stage environments, code repositories, and product prototypes. For more information see Burstable Performance Instances.

''Features:''

*High frequency Intel Xeon processors
*Burstable CPU, governed by CPU Credits, and consistent baseline performance
*Lowest-cost general purpose instance type, and Free Tier eligible*
*Balance of compute, memory, and network resources
* t2.micro only. If configured as T2 Unlimited, charges may apply if average CPU utilization exceeds the baseline of the instance. See documentation for more details.
|Model	|vCPU*	|CPU Credits / hour |Mem (GiB)	 |Storage|Network Performance|
|t2.nano	|1	|3	|0.5	|EBS-Only|	Low|
|t2.micro	|1	|6	|1	|EBS-Only|Low to Moderate|
|t2.small	|1	|12	|2     |EBS-Only|Low to Moderate|
|t2.medium|2	|24	|4	|EBS-Only|Low to Moderate|
|t2.large	|2	|36   |8	|EBS-Only|	Low to Moderate|
|t2.xlarge	|4	|54	|16	|EBS-Only|	Moderate|
|t2.2xlarge|8	|81	|32	|EBS-Only|	Moderate|
All instances have the following specs:

*Intel AVX†, Intel Turbo†
*t2.nano, t2.micro, t2.small, and t2.medium have up to 3.3 GHz Intel Scalable processors
*t2.large, t2.xlarge, and t2.2xlarge have up to 3.0 GHz Intel Scalable processors
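The "CPU Credits / hour" column in the table above directly encodes each size's baseline: since one credit is one vCPU-minute, baseline % = credits per hour / (vCPUs x 60). A quick JavaScript check (`baselinePct` is a made-up helper; rates taken from the table):

```javascript
// Derive the per-vCPU baseline utilization implied by the earn rates above.
var t2Sizes = [
  { model: "t2.nano",   vCPUs: 1, creditsPerHour: 3 },
  { model: "t2.micro",  vCPUs: 1, creditsPerHour: 6 },
  { model: "t2.small",  vCPUs: 1, creditsPerHour: 12 },
  { model: "t2.medium", vCPUs: 2, creditsPerHour: 24 },
  { model: "t2.large",  vCPUs: 2, creditsPerHour: 36 }
];
function baselinePct(size) {
  // one credit = one vCPU-minute, so divide by total vCPU-minutes per hour
  return (size.creditsPerHour / (size.vCPUs * 60)) * 100;
}
t2Sizes.forEach(function (s) {
  console.log(s.model + ": " + baselinePct(s).toFixed(1) + "% per vCPU");
});
// t2.nano -> 5.0%, t2.micro -> 10.0%, t2.medium -> 20.0%
```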
''Use Cases:''

Websites and web applications, development environments, build servers, code repositories, micro services, test and staging environments, and line of business applications.  
T3 instances are the next generation of burstable general-purpose instance types that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use.

T3 instances accumulate CPU credits when a workload is operating below its baseline threshold. Each earned CPU credit provides the T3 instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3 instances can burst at any time for as long as required in Unlimited mode.

''Features:''

*Burstable CPU, governed by CPU Credits, and consistent baseline performance
*Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*AWS Nitro System and high frequency Intel Xeon Scalable processors result in up to a 30% price performance improvement over T2 instances
|Model	|vCPU*	|CPU Credits/hour	|Mem (GiB)	|Storage	|Network Performance (Gbps)|
|t3.nano|	2|6           |0.5	|EBS-Only	|Up to 5|
|t3.micro	|2|12|1	|EBS-Only|Up to 5|
|t3.small	|2|24|2	|EBS-Only|Up to 5|
|t3.medium	|2	|24|4	|EBS-Only|Up to 5|
|t3.large	|2	|36|8	|EBS-Only	|Up to 5|
|t3.xlarge	|4	|96|16	|EBS-Only	|Up to 5|
|t3.2xlarge	|8	|192|32	|EBS-Only	|Up to 5|
All instances have the following specs:

*2.5 GHz Intel Scalable Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases:''

Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications
T3a instances are the next generation of burstable general-purpose instance types that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3a instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use.

T3a instances accumulate CPU credits when a workload is operating below its baseline threshold. Each earned CPU credit provides the T3a instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3a instances can burst at any time for as long as required in Unlimited mode.

''Features:''

*Burstable CPU, governed by CPU Credits, and consistent baseline performance
*Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
*Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
*T3a features 2.5 GHz AMD EPYC 7000 series processors that offer customers a 10% cost savings over T3 instances
|Model	|vCPU*	|CPU Credits/hour	|Mem (GiB)	|Storage	|Network Performance (Gbps)|
|t3a.nano	|2|6|0.5	|EBS-Only	|Up to 5|
|t3a.micro	|2|12|1	|EBS-Only|Up to 5|
|t3a.small	|2|24|2	|EBS-Only|Up to 5|
|t3a.medium|	2|	24|4	|EBS-Only|Up to 5|
|t3a.large|	2|	36|8	|EBS-Only	|Up to 5|
|t3a.xlarge|	4|	96|16	|EBS-Only	|Up to 5|
|t3a.2xlarge|	8|	192|32	|EBS-Only	|Up to 5|
/***
Description: Contains the stuff you need to use Tiddlyspot
Note, you also need UploadPlugin, PasswordOptionPlugin and LoadRemoteFileThroughProxy
from http://tiddlywiki.bidix.info for a complete working Tiddlyspot site.
***/
//{{{

// edit this if you are migrating sites or retrofitting an existing TW
config.tiddlyspotSiteId = 'ksaws';

// make it so you can by default see edit controls via http
config.options.chkHttpReadOnly = false;
window.readOnly = false; // make sure of it (for tw 2.2)
window.showBackstage = true; // show backstage too

// disable autosave in d3
if (window.location.protocol != "file:")
	config.options.chkGTDLazyAutoSave = false;

// tweak shadow tiddlers to add upload button, password entry box etc
with (config.shadowTiddlers) {
	SiteUrl = 'http://'+config.tiddlyspotSiteId+'.tiddlyspot.com';
	SideBarOptions = SideBarOptions.replace(/(<<saveChanges>>)/,"$1<<tiddler TspotSidebar>>");
	OptionsPanel = OptionsPanel.replace(/^/,"<<tiddler TspotOptions>>");
	DefaultTiddlers = DefaultTiddlers.replace(/^/,"[[WelcomeToTiddlyspot]] ");
	MainMenu = MainMenu.replace(/^/,"[[WelcomeToTiddlyspot]] ");
}

// create some shadow tiddler content
merge(config.shadowTiddlers,{

'TspotControls':[
 "| tiddlyspot password:|<<option pasUploadPassword>>|",
 "| site management:|<<upload http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/store.cgi index.html . .  " + config.tiddlyspotSiteId + ">>//(requires tiddlyspot password)//<br>[[control panel|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/controlpanel]], [[download (go offline)|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/download]]|",
 "| links:|[[tiddlyspot.com|http://tiddlyspot.com/]], [[FAQs|http://faq.tiddlyspot.com/]], [[blog|http://tiddlyspot.blogspot.com/]], email [[support|mailto:support@tiddlyspot.com]] & [[feedback|mailto:feedback@tiddlyspot.com]], [[donate|http://tiddlyspot.com/?page=donate]]|"
].join("\n"),

'TspotOptions':[
 "tiddlyspot password:",
 "<<option pasUploadPassword>>",
 ""
].join("\n"),

'TspotSidebar':[
 "<<upload http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/store.cgi index.html . .  " + config.tiddlyspotSiteId + ">><html><a href='http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/download' class='button'>download</a></html>"
].join("\n"),

'WelcomeToTiddlyspot':[
 "This document is a ~TiddlyWiki from tiddlyspot.com.  A ~TiddlyWiki is an electronic notebook that is great for managing todo lists, personal information, and all sorts of things.",
 "",
 "@@font-weight:bold;font-size:1.3em;color:#444; //What now?// &nbsp;&nbsp;@@ Before you can save any changes, you need to enter your password in the form below.  Then configure privacy and other site settings at your [[control panel|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/controlpanel]] (your control panel username is //" + config.tiddlyspotSiteId + "//).",
 "<<tiddler TspotControls>>",
 "See also GettingStarted.",
 "",
 "@@font-weight:bold;font-size:1.3em;color:#444; //Working online// &nbsp;&nbsp;@@ You can edit this ~TiddlyWiki right now, and save your changes using the \"save to web\" button in the column on the right.",
 "",
 "@@font-weight:bold;font-size:1.3em;color:#444; //Working offline// &nbsp;&nbsp;@@ A fully functioning copy of this ~TiddlyWiki can be saved onto your hard drive or USB stick.  You can make changes and save them locally without being connected to the Internet.  When you're ready to sync up again, just click \"upload\" and your ~TiddlyWiki will be saved back to tiddlyspot.com.",
 "",
 "@@font-weight:bold;font-size:1.3em;color:#444; //Help!// &nbsp;&nbsp;@@ Find out more about ~TiddlyWiki at [[TiddlyWiki.com|http://tiddlywiki.com]].  Also visit [[TiddlyWiki.org|http://tiddlywiki.org]] for documentation on learning and using ~TiddlyWiki. New users are especially welcome on the [[TiddlyWiki mailing list|http://groups.google.com/group/TiddlyWiki]], which is an excellent place to ask questions and get help.  If you have a tiddlyspot related problem email [[tiddlyspot support|mailto:support@tiddlyspot.com]].",
 "",
 "@@font-weight:bold;font-size:1.3em;color:#444; //Enjoy :)// &nbsp;&nbsp;@@ We hope you like using your tiddlyspot.com site.  Please email [[feedback@tiddlyspot.com|mailto:feedback@tiddlyspot.com]] with any comments or suggestions."
].join("\n")

});
//}}}
| !date | !user | !location | !storeUrl | !uploadDir | !toFilename | !backupdir | !origin |
| 01/07/2019 20:39:16 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 01/07/2019 20:40:12 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 01/07/2019 20:41:31 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 01/07/2019 21:24:33 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 01/07/2019 21:45:42 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . |
| 02/07/2019 11:34:22 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 02/07/2019 11:37:07 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 02/07/2019 12:07:26 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . |
| 04/07/2019 22:32:26 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . | ok |
| 04/07/2019 22:33:27 | Sara | [[/|http://ksaws.tiddlyspot.com/]] | [[store.cgi|http://ksaws.tiddlyspot.com/store.cgi]] | . | [[index.html | http://ksaws.tiddlyspot.com/index.html]] | . |
/***
|''Name:''|UploadPlugin|
|''Description:''|Save to web a TiddlyWiki|
|''Version:''|4.1.3|
|''Date:''|Feb 24, 2008|
|''Source:''|http://tiddlywiki.bidix.info/#UploadPlugin|
|''Documentation:''|http://tiddlywiki.bidix.info/#UploadPluginDoc|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0|
|''Requires:''|PasswordOptionPlugin|
***/
//{{{
version.extensions.UploadPlugin = {
	major: 4, minor: 1, revision: 3,
	date: new Date("Feb 24, 2008"),
	source: 'http://tiddlywiki.bidix.info/#UploadPlugin',
	author: 'BidiX (BidiX (at) bidix (dot) info)',
	coreVersion: '2.2.0'
};

//
// Environment
//

if (!window.bidix) window.bidix = {}; // bidix namespace
bidix.debugMode = false;	// true to activate both in Plugin and UploadService
	
//
// Upload Macro
//

config.macros.upload = {
// default values
	defaultBackupDir: '',	//no backup
	defaultStoreScript: "store.php",
	defaultToFilename: "index.html",
	defaultUploadDir: ".",
	authenticateUser: true	// UploadService Authenticate User
};
	
config.macros.upload.label = {
	promptOption: "Save and Upload this TiddlyWiki with UploadOptions",
	promptParamMacro: "Save and Upload this TiddlyWiki in %0",
	saveLabel: "save to web", 
	saveToDisk: "save to disk",
	uploadLabel: "upload"	
};

config.macros.upload.messages = {
	noStoreUrl: "No store URL in parameters or options",
	usernameOrPasswordMissing: "Username or password missing"
};

config.macros.upload.handler = function(place,macroName,params) {
	if (readOnly)
		return;
	var label;
	if (document.location.toString().substr(0,4) == "http") 
		label = this.label.saveLabel;
	else
		label = this.label.uploadLabel;
	var prompt;
	if (params[0]) {
		prompt = this.label.promptParamMacro.toString().format([this.destFile(params[0], 
			(params[1] ? params[1]:bidix.basename(window.location.toString())), params[3])]);
	} else {
		prompt = this.label.promptOption;
	}
	createTiddlyButton(place, label, prompt, function() {config.macros.upload.action(params);}, null, null, this.accessKey);
};

config.macros.upload.action = function(params)
{
		// for missing macro parameter set value from options
		if (!params) params = {};
		var storeUrl = params[0] ? params[0] : config.options.txtUploadStoreUrl;
		var toFilename = params[1] ? params[1] : config.options.txtUploadFilename;
		var backupDir = params[2] ? params[2] : config.options.txtUploadBackupDir;
		var uploadDir = params[3] ? params[3] : config.options.txtUploadDir;
		var username = params[4] ? params[4] : config.options.txtUploadUserName;
		var password = config.options.pasUploadPassword; // for security reason no password as macro parameter	
		// for still missing parameter set default value
		if ((!storeUrl) && (document.location.toString().substr(0,4) == "http")) 
			storeUrl = bidix.dirname(document.location.toString())+'/'+config.macros.upload.defaultStoreScript;
		if (storeUrl.substr(0,4) != "http")
			storeUrl = bidix.dirname(document.location.toString()) +'/'+ storeUrl;
		if (!toFilename)
			toFilename = bidix.basename(window.location.toString());
		if (!toFilename)
			toFilename = config.macros.upload.defaultToFilename;
		if (!uploadDir)
			uploadDir = config.macros.upload.defaultUploadDir;
		if (!backupDir)
			backupDir = config.macros.upload.defaultBackupDir;
		// report error if still missing
		if (!storeUrl) {
			alert(config.macros.upload.messages.noStoreUrl);
			clearMessage();
			return false;
		}
		if (config.macros.upload.authenticateUser && (!username || !password)) {
			alert(config.macros.upload.messages.usernameOrPasswordMissing);
			clearMessage();
			return false;
		}
		bidix.upload.uploadChanges(false,null,storeUrl, toFilename, uploadDir, backupDir, username, password); 
		return false; 
};

config.macros.upload.destFile = function(storeUrl, toFilename, uploadDir) 
{
	if (!storeUrl)
		return null;
		var dest = bidix.dirname(storeUrl);
		if (uploadDir && uploadDir != '.')
			dest = dest + '/' + uploadDir;
		dest = dest + '/' + toFilename;
	return dest;
};

//
// uploadOptions Macro
//

config.macros.uploadOptions = {
	handler: function(place,macroName,params) {
		var wizard = new Wizard();
		wizard.createWizard(place,this.wizardTitle);
		wizard.addStep(this.step1Title,this.step1Html);
		var markList = wizard.getElement("markList");
		var listWrapper = document.createElement("div");
		markList.parentNode.insertBefore(listWrapper,markList);
		wizard.setValue("listWrapper",listWrapper);
		this.refreshOptions(listWrapper,false);
		var uploadCaption;
		if (document.location.toString().substr(0,4) == "http") 
			uploadCaption = config.macros.upload.label.saveLabel;
		else
			uploadCaption = config.macros.upload.label.uploadLabel;
		
		wizard.setButtons([
				{caption: uploadCaption, tooltip: config.macros.upload.label.promptOption, 
					onClick: config.macros.upload.action},
				{caption: this.cancelButton, tooltip: this.cancelButtonPrompt, onClick: this.onCancel}
				
			]);
	},
	options: [
		"txtUploadUserName",
		"pasUploadPassword",
		"txtUploadStoreUrl",
		"txtUploadDir",
		"txtUploadFilename",
		"txtUploadBackupDir",
		"chkUploadLog",
		"txtUploadLogMaxLine"		
	],
	refreshOptions: function(listWrapper) {
		var opts = [];
		for(var i=0; i<this.options.length; i++) {
			var opt = {};
			opt.option = "";
			var n = this.options[i];
			opt.name = n;
			opt.lowlight = !config.optionsDesc[n];
			opt.description = opt.lowlight ? this.unknownDescription : config.optionsDesc[n];
			opts.push(opt);
		}
		var listview = ListView.create(listWrapper,opts,this.listViewTemplate);
		for(n=0; n<opts.length; n++) {
			var type = opts[n].name.substr(0,3);
			var h = config.macros.option.types[type];
			if (h && h.create) {
				h.create(opts[n].colElements['option'],type,opts[n].name,opts[n].name,"no");
			}
		}
		
	},
	onCancel: function(e)
	{
		backstage.switchTab(null);
		return false;
	},
	
	wizardTitle: "Upload with options",
	step1Title: "These options are saved in cookies in your browser",
	step1Html: "<input type='hidden' name='markList'></input><br>",
	cancelButton: "Cancel",
	cancelButtonPrompt: "Cancel prompt",
	listViewTemplate: {
		columns: [
			{name: 'Description', field: 'description', title: "Description", type: 'WikiText'},
			{name: 'Option', field: 'option', title: "Option", type: 'String'},
			{name: 'Name', field: 'name', title: "Name", type: 'String'}
			],
		rowClasses: [
			{className: 'lowlight', field: 'lowlight'} 
			]}
};

//
// upload functions
//

if (!bidix.upload) bidix.upload = {};

if (!bidix.upload.messages) bidix.upload.messages = {
	//from saving
	invalidFileError: "The original file '%0' does not appear to be a valid TiddlyWiki",
	backupSaved: "Backup saved",
	backupFailed: "Failed to upload backup file",
	rssSaved: "RSS feed uploaded",
	rssFailed: "Failed to upload RSS feed file",
	emptySaved: "Empty template uploaded",
	emptyFailed: "Failed to upload empty template file",
	mainSaved: "Main TiddlyWiki file uploaded",
	mainFailed: "Failed to upload main TiddlyWiki file. Your changes have not been saved",
	//specific upload
	loadOriginalHttpPostError: "Can't get original file",
	aboutToSaveOnHttpPost: 'About to upload on %0 ...',
	storePhpNotFound: "The store script '%0' was not found."
};

bidix.upload.uploadChanges = function(onlyIfDirty,tiddlers,storeUrl,toFilename,uploadDir,backupDir,username,password)
{
	var callback = function(status,uploadParams,original,url,xhr) {
		if (!status) {
			displayMessage(bidix.upload.messages.loadOriginalHttpPostError);
			return;
		}
		if (bidix.debugMode) 
			alert(original.substr(0,500)+"\n...");
		// Locate the storeArea div's 
		var posDiv = locateStoreArea(original);
		if((posDiv[0] == -1) || (posDiv[1] == -1)) {
			alert(config.messages.invalidFileError.format([localPath]));
			return;
		}
		bidix.upload.uploadRss(uploadParams,original,posDiv);
	};
	
	if(onlyIfDirty && !store.isDirty())
		return;
	clearMessage();
	// save on localdisk ?
	if (document.location.toString().substr(0,4) == "file") {
		var path = document.location.toString();
		var localPath = getLocalPath(path);
		saveChanges();
	}
	// get original
	var uploadParams = new Array(storeUrl,toFilename,uploadDir,backupDir,username,password);
	var originalPath = document.location.toString();
	// If url is a directory : add index.html
	if (originalPath.charAt(originalPath.length-1) == "/")
		originalPath = originalPath + "index.html";
	var dest = config.macros.upload.destFile(storeUrl,toFilename,uploadDir);
	var log = new bidix.UploadLog();
	log.startUpload(storeUrl, dest, uploadDir,  backupDir);
	displayMessage(bidix.upload.messages.aboutToSaveOnHttpPost.format([dest]));
	if (bidix.debugMode) 
		alert("about to execute Http - GET on "+originalPath);
	var r = doHttp("GET",originalPath,null,null,username,password,callback,uploadParams,null);
	if (typeof r == "string")
		displayMessage(r);
	return r;
};

bidix.upload.uploadRss = function(uploadParams,original,posDiv) 
{
	var callback = function(status,params,responseText,url,xhr) {
		if(status) {
			var destfile = responseText.substring(responseText.indexOf("destfile:")+9,responseText.indexOf("\n", responseText.indexOf("destfile:")));
			displayMessage(bidix.upload.messages.rssSaved,bidix.dirname(url)+'/'+destfile);
			bidix.upload.uploadMain(params[0],params[1],params[2]);
		} else {
			displayMessage(bidix.upload.messages.rssFailed);			
		}
	};
	// do uploadRss
	if(config.options.chkGenerateAnRssFeed) {
		var rssPath = uploadParams[1].substr(0,uploadParams[1].lastIndexOf(".")) + ".xml";
		var rssUploadParams = new Array(uploadParams[0],rssPath,uploadParams[2],'',uploadParams[4],uploadParams[5]);
		var rssString = generateRss();
		// no UnicodeToUTF8 conversion needed when location is "file" !!!
		if (document.location.toString().substr(0,4) != "file")
			rssString = convertUnicodeToUTF8(rssString);	
		bidix.upload.httpUpload(rssUploadParams,rssString,callback,Array(uploadParams,original,posDiv));
	} else {
		bidix.upload.uploadMain(uploadParams,original,posDiv);
	}
};

bidix.upload.uploadMain = function(uploadParams,original,posDiv) 
{
	var callback = function(status,params,responseText,url,xhr) {
		var log = new bidix.UploadLog();
		if(status) {
			// if backupDir specified
			if ((params[3]) && (responseText.indexOf("backupfile:") > -1))  {
				var backupfile = responseText.substring(responseText.indexOf("backupfile:")+11,responseText.indexOf("\n", responseText.indexOf("backupfile:")));
				displayMessage(bidix.upload.messages.backupSaved,bidix.dirname(url)+'/'+backupfile);
			}
			var destfile = responseText.substring(responseText.indexOf("destfile:")+9,responseText.indexOf("\n", responseText.indexOf("destfile:")));
			displayMessage(bidix.upload.messages.mainSaved,bidix.dirname(url)+'/'+destfile);
			store.setDirty(false);
			log.endUpload("ok");
		} else {
			alert(bidix.upload.messages.mainFailed);
			displayMessage(bidix.upload.messages.mainFailed);
			log.endUpload("failed");			
		}
	};
	// do uploadMain
	var revised = bidix.upload.updateOriginal(original,posDiv);
	bidix.upload.httpUpload(uploadParams,revised,callback,uploadParams);
};

bidix.upload.httpUpload = function(uploadParams,data,callback,params)
{
	var localCallback = function(status,params,responseText,url,xhr) {
		url = (url.indexOf("nocache=") < 0 ? url : url.substring(0,url.indexOf("nocache=")-1));
		if (xhr.status == 404)
			alert(bidix.upload.messages.storePhpNotFound.format([url]));
		if ((bidix.debugMode) || (responseText.indexOf("Debug mode") >= 0 )) {
			alert(responseText);
			if (responseText.indexOf("Debug mode") >= 0 )
				responseText = responseText.substring(responseText.indexOf("\n\n")+2);
		} else if (responseText.charAt(0) != '0') 
			alert(responseText);
		if (responseText.charAt(0) != '0')
			status = null;
		callback(status,params,responseText,url,xhr);
	};
	// do httpUpload
	var boundary = "---------------------------"+"AaB03x";	
	var uploadFormName = "UploadPlugin";
	// compose headers data
	var sheader = "";
	sheader += "--" + boundary + "\r\nContent-disposition: form-data; name=\"";
	sheader += uploadFormName +"\"\r\n\r\n";
	sheader += "backupDir="+uploadParams[3] +
				";user=" + uploadParams[4] +
				";password=" + uploadParams[5] +
				";uploaddir=" + uploadParams[2];
	if (bidix.debugMode)
		sheader += ";debug=1";
	sheader += ";;\r\n"; 
	sheader += "\r\n" + "--" + boundary + "\r\n";
	sheader += "Content-disposition: form-data; name=\"userfile\"; filename=\""+uploadParams[1]+"\"\r\n";
	sheader += "Content-Type: text/html;charset=UTF-8" + "\r\n";
	sheader += "Content-Length: " + data.length + "\r\n\r\n";
	// compose trailer data
	var strailer = new String();
	strailer = "\r\n--" + boundary + "--\r\n";
	data = sheader + data + strailer;
	if (bidix.debugMode) alert("about to execute Http - POST on "+uploadParams[0]+"\n with \n"+data.substr(0,500)+ " ... ");
	var r = doHttp("POST",uploadParams[0],data,"multipart/form-data; ;charset=UTF-8; boundary="+boundary,uploadParams[4],uploadParams[5],localCallback,params,null);
	if (typeof r == "string")
		displayMessage(r);
	return r;
};

// same as Saving's updateOriginal but without convertUnicodeToUTF8 calls
bidix.upload.updateOriginal = function(original, posDiv)
{
	if (!posDiv)
		posDiv = locateStoreArea(original);
	if((posDiv[0] == -1) || (posDiv[1] == -1)) {
		alert(config.messages.invalidFileError.format([localPath]));
		return;
	}
	var revised = original.substr(0,posDiv[0] + startSaveArea.length) + "\n" +
				store.allTiddlersAsHtml() + "\n" +
				original.substr(posDiv[1]);
	var newSiteTitle = getPageTitle().htmlEncode();
	revised = revised.replaceChunk("<title"+">","</title"+">"," " + newSiteTitle + " ");
	revised = updateMarkupBlock(revised,"PRE-HEAD","MarkupPreHead");
	revised = updateMarkupBlock(revised,"POST-HEAD","MarkupPostHead");
	revised = updateMarkupBlock(revised,"PRE-BODY","MarkupPreBody");
	revised = updateMarkupBlock(revised,"POST-SCRIPT","MarkupPostBody");
	return revised;
};

//
// UploadLog
// 
// config.options.chkUploadLog :
//		false : no logging
//		true : logging
// config.options.txtUploadLogMaxLine :
//		-1 : no limit
//      0 :  no Log lines but UploadLog is still in place
//		n :  the last n lines are only kept
//		NaN : no limit (-1)

bidix.UploadLog = function() {
	if (!config.options.chkUploadLog) 
		return; // this.tiddler = null
	this.tiddler = store.getTiddler("UploadLog");
	if (!this.tiddler) {
		this.tiddler = new Tiddler();
		this.tiddler.title = "UploadLog";
		this.tiddler.text = "| !date | !user | !location | !storeUrl | !uploadDir | !toFilename | !backupdir | !origin |";
		this.tiddler.created = new Date();
		this.tiddler.modifier = config.options.txtUserName;
		this.tiddler.modified = new Date();
		store.addTiddler(this.tiddler);
	}
	return this;
};

bidix.UploadLog.prototype.addText = function(text) {
	if (!this.tiddler)
		return;
	// retrieve maxLine when we need it
	var maxLine = parseInt(config.options.txtUploadLogMaxLine,10);
	if (isNaN(maxLine))
		maxLine = -1;
	// add text
	if (maxLine != 0) 
		this.tiddler.text = this.tiddler.text + text;
	// Truncate to maxLine
	if (maxLine >= 0) {
		var textArray = this.tiddler.text.split('\n');
		if (textArray.length > maxLine + 1)
			textArray.splice(1,textArray.length-1-maxLine);
		this.tiddler.text = textArray.join('\n');
	}
	// update tiddler fields
	this.tiddler.modifier = config.options.txtUserName;
	this.tiddler.modified = new Date();
	store.addTiddler(this.tiddler);
	// refresh and notify for immediate update
	story.refreshTiddler(this.tiddler.title);
	store.notify(this.tiddler.title, true);
};

bidix.UploadLog.prototype.startUpload = function(storeUrl, toFilename, uploadDir,  backupDir) {
	if (!this.tiddler)
		return;
	var now = new Date();
	var text = "\n| ";
	var filename = bidix.basename(document.location.toString());
	if (!filename) filename = '/';
	text += now.formatString("0DD/0MM/YYYY 0hh:0mm:0ss") +" | ";
	text += config.options.txtUserName + " | ";
	text += "[["+filename+"|"+location + "]] |";
	text += " [[" + bidix.basename(storeUrl) + "|" + storeUrl + "]] | ";
	text += uploadDir + " | ";
	text += "[[" + bidix.basename(toFilename) + " | " +toFilename + "]] | ";
	text += backupDir + " |";
	this.addText(text);
};

bidix.UploadLog.prototype.endUpload = function(status) {
	if (!this.tiddler)
		return;
	this.addText(" "+status+" |");
};

//
// Utilities
// 

bidix.checkPlugin = function(plugin, major, minor, revision) {
	var ext = version.extensions[plugin];
	if (!
		(ext  && 
			((ext.major > major) || 
			((ext.major == major) && (ext.minor > minor))  ||
			((ext.major == major) && (ext.minor == minor) && (ext.revision >= revision))))) {
			// write error in PluginManager
			if (pluginInfo)
				pluginInfo.log.push("Requires " + plugin + " " + major + "." + minor + "." + revision);
			eval(plugin); // generate an error : "Error: ReferenceError: xxxx is not defined"
	}
};

bidix.dirname = function(filePath) {
	if (!filePath) 
		return;
	var lastpos;
	if ((lastpos = filePath.lastIndexOf("/")) != -1) {
		return filePath.substring(0, lastpos);
	} else {
		return filePath.substring(0, filePath.lastIndexOf("\\"));
	}
};

bidix.basename = function(filePath) {
	if (!filePath) 
		return;
	var lastpos;
	if ((lastpos = filePath.lastIndexOf("#")) != -1) 
		filePath = filePath.substring(0, lastpos);
	if ((lastpos = filePath.lastIndexOf("/")) != -1) {
		return filePath.substring(lastpos + 1);
	} else
		return filePath.substring(filePath.lastIndexOf("\\")+1);
};

bidix.initOption = function(name,value) {
	if (!config.options[name])
		config.options[name] = value;
};

//
// Initializations
//

// require PasswordOptionPlugin 1.0.1 or better
bidix.checkPlugin("PasswordOptionPlugin", 1, 0, 1);

// styleSheet
setStylesheet('.txtUploadStoreUrl, .txtUploadBackupDir, .txtUploadDir {width: 22em;}',"uploadPluginStyles");

//optionsDesc
merge(config.optionsDesc,{
	txtUploadStoreUrl: "Url of the UploadService script (default: store.php)",
	txtUploadFilename: "Filename of the uploaded file (default: index.html)",
	txtUploadDir: "Relative Directory where to store the file (default: . (downloadService directory))",
	txtUploadBackupDir: "Relative Directory where to backup the file. If empty no backup. (default: ''(empty))",
	txtUploadUserName: "Upload Username",
	pasUploadPassword: "Upload Password",
	chkUploadLog: "do Logging in UploadLog (default: true)",
	txtUploadLogMaxLine: "Maximum of lines in UploadLog (default: 10)"
});

// Options Initializations
bidix.initOption('txtUploadStoreUrl','');
bidix.initOption('txtUploadFilename','');
bidix.initOption('txtUploadDir','');
bidix.initOption('txtUploadBackupDir','');
bidix.initOption('txtUploadUserName','');
bidix.initOption('pasUploadPassword','');
bidix.initOption('chkUploadLog',true);
bidix.initOption('txtUploadLogMaxLine','10');


// Backstage
merge(config.tasks,{
	uploadOptions: {text: "upload", tooltip: "Change UploadOptions and Upload", content: '<<uploadOptions>>'}
});
config.backstageTasks.push("uploadOptions");


//}}}

X1 instances are optimized for large-scale, enterprise-class and in-memory applications, and offer one of the lowest prices per GiB of RAM among Amazon EC2 instance types.

''Features:''

*High frequency Intel Xeon E7-8880 v3 (Haswell) processors
*One of the lowest prices per GiB of RAM
*Up to 1,952 GiB of DRAM-based instance memory
*SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
*Ability to control processor C-state and P-state configuration
|Model	|vCPU	|Mem (GiB)	|SSD Storage (GB)	|Dedicated EBS Bandwidth (Mbps)	|Network Performance|
|x1.16xlarge	|64	|976	|1 x 1,920	|7,000	|10 Gigabit|
|x1.32xlarge	|128	|1,952	|2 x 1,920	|14,000	|25 Gigabit|
All instances have the following specs:

*2.3 GHz Intel Xeon E7-8880 v3 Processor
*Intel AVX†, Intel AVX2†, Intel Turbo
*EBS Optimized
*Enhanced Networking†

''Use Cases:''

In-memory databases (e.g. SAP HANA), big data processing engines (e.g. Apache Spark or Presto), and high performance computing (HPC). Certified by SAP to run Business Warehouse on HANA (BW), Data Mart Solutions on HANA, Business Suite on HANA (SoH), and Business Suite S/4HANA.
X1e instances are optimized for high-performance databases, in-memory databases, and other memory-intensive enterprise applications. X1e instances offer one of the lowest prices per GiB of RAM among Amazon EC2 instance types.

''Features:''

*High frequency Intel Xeon E7-8880 v3 (Haswell) processors
*One of the lowest prices per GiB of RAM
*Up to 3,904 GiB of DRAM-based instance memory
*SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
*Ability to control processor C-state and P-state configurations on x1e.32xlarge, x1e.16xlarge, and x1e.8xlarge instances
|Model	|vCPU	|Mem (GiB)	|SSD Storage (GB)	|Dedicated EBS Bandwidth (Mbps)	|Network Performance|
|x1e.xlarge	|4	|122	|1 x 120	|500	|Up to 10 Gigabit|
|x1e.2xlarge	|8	|244	|1 x 240	|1,000	|Up to 10 Gigabit|
|x1e.4xlarge	|16	|488	|1 x 480	|1,750	|Up to 10 Gigabit|
|x1e.8xlarge	|32	|976	|1 x 960	|3,500	|Up to 10 Gigabit|
|x1e.16xlarge	|64	|1,952	|1 x 1,920	|7,000	|10 Gigabit|
|x1e.32xlarge	|128	|3,904	|2 x 1,920	|14,000	|25 Gigabit|
All instances have the following specs:

*2.3 GHz Intel Xeon E7-8880 v3 Processor
*Intel AVX†, Intel AVX2†
*EBS Optimized
*Enhanced Networking†
In addition, x1e.16xlarge and x1e.32xlarge have:

*Intel Turbo

''Use Cases:''

High performance databases, in-memory databases (e.g. SAP HANA), and memory-intensive applications. The x1e.32xlarge instance is certified by SAP to run next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.
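One regularity worth noting in the X1 and X1e spec tables: every size keeps a fixed memory-to-vCPU ratio, and X1e doubles it relative to X1, so rough capacity planning can work from the ratio alone. A quick JavaScript check (`memPerVcpu` is a made-up helper; figures taken from the tables above):

```javascript
// Memory-to-vCPU ratio implied by the instance tables above.
function memPerVcpu(memGiB, vCPUs) {
  return memGiB / vCPUs;
}
console.log(memPerVcpu(976, 64));   // x1.16xlarge  -> 15.25 GiB per vCPU
console.log(memPerVcpu(3904, 128)); // x1e.32xlarge -> 30.5 GiB per vCPU
console.log(memPerVcpu(122, 4));    // x1e.xlarge   -> 30.5 GiB per vCPU
```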
[[Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances|https://www.youtube.com/watch?v=PiFv8Hh7PLY]]
[[How to create a windows server in AWS and how to access it #video 1|https://www.youtube.com/watch?v=20itfTxKEn8]]
[[How to create redhat linux instance in AWS and how to access it from windows putty client #video 2|https://www.youtube.com/watch?v=KnEvVwfKPwI]]
[[How to access linux instance in AWS from web browser #video 3|https://www.youtube.com/watch?v=VAOBWDhhAOE]]
[[How to transfer files from local windows to AWS linux instance using winscp #video 4|https://www.youtube.com/watch?v=nSX4GjnmGlU]]
[[HowTo: set a GUI in a Ubuntu AWS EC2 instance|https://www.youtube.com/watch?v=9BAoJ7JZHr0]]
[[How to Use a GUI with Ubuntu Linux on AWS EC2|https://www.youtube.com/watch?v=6x_okhl_CF4]]
[[Launching Your First AWS Linux EC2 Instance|https://www.youtube.com/watch?v=kjrKDtxAZpE]]
[[Amazon Virtual Private Cloud (VPC) | AWS Tutorial For Beginners | AWS Training Video | Simplilearn|https://www.youtube.com/watch?v=fpxDGU2KdkA]]
[[how to push source code from linux to github using git in linux #video 5|https://www.youtube.com/watch?v=CKBv9sn0Dz4]]
[[1.6: Cloning Repo and Push/Pull - Git and GitHub for Poets|https://www.youtube.com/watch?v=yXT1ElMEkW8]]

[[Git and GitHub for Poets|https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZF9C0YMKuns9sLDzK6zoiV]]
[[Session 2: Regular Expressions - Programming with Text|https://www.youtube.com/playlist?list=PLRqwX-V7Uu6YEypLuls7iidwHMdCM6o2w]]


[[Continuent Webinar: Tungsten vs RDS|https://www.youtube.com/watch?v=6Tv4_io8xbE&feature=youtu.be]] slides: http://continuent-videos.s3.amazonaws.com/Continuent-Webinar-Tungsten_vs_RDS-20170621.pdf




1. 9:12 AWS In 10 Minutes | AWS Tutorial For Beginners | AWS Training Video | AWS Tutorial | Simplilearn
2. 10:03 What is AWS | What is Amazon Web Services | AWS Tutorial for Beginners | AWS Training | Simplilearn
3. 23:18 AWS Tutorial for Beginners | AWS Certified Solutions Architect Tutorial | AWS Tutorial | Simplilearn
4. 24:51 AWS Training Video | AWS Certified Solutions Architect Training | AWS Tutorial | Simplilearn
5. 9:11 AWS Certification In 10 Minutes | Choosing The Right AWS Certification | AWS Training | Simplilearn
6. 5:22 Top 10 Reasons to Choose AWS | Why AWS? | AWS Services | AWS Tutorial for Beginners | Simplilearn
7. 10:17 How To Become An AWS Solutions Architect | Getting Started With AWS | How To Learn AWS | Simplilearn
8. 22:17 AWS EC2 Tutorial For Beginners | What Is AWS EC2? | AWS EC2 Tutorial | AWS Training | Simplilearn
9. 45:12 AWS S3 Tutorial For Beginners | AWS S3 Bucket Tutorial | AWS Tutorial For Beginners | Simplilearn
10. 55:57 Amazon Virtual Private Cloud (VPC) | AWS Tutorial For Beginners | AWS Training Video | Simplilearn
11. 43:31 AWS IAM Tutorial | AWS Identity And Access Management | AWS Tutorial For Beginners | Simplilearn
12. 17:41 AWS Lambda Tutorial For Beginners | What is AWS Lambda? | AWS Tutorial For Beginners | Simplilearn
13. 41:51 AWS CloudFormation Tutorial | AWS CloudFormation Demo | AWS Tutorial For Beginners | Simplilearn
14. 9:05 AWS DynamoDB Tutorial | AWS Services | AWS Tutorial For Beginners | AWS Training Video | Simplilearn
15. 1:31:16 AWS Interview Questions Part - 1 | AWS Interview Questions And Answers Part - 1 | Simplilearn
16. 1:01:56 AWS Interview Questions Part - 2 | AWS Interview Questions And Answers Part - 2 | Simplilearn
17. 12:41 What is Cloud Computing? | Cloud Computing Tutorial for Beginners | Cloud Computing | Simplilearn
18. 24:38 Cloud Computing Tutorial for Beginners | Cloud Computing Explained | Cloud Computing | Simplilearn
19. 39:41 Introduction To Amazon Web Services | AWS Tutorial For Beginners | AWS Training Video | Simplilearn
20. 2:20:25 AWS Tutorial For Beginners - 1 | AWS Tutorial | AWS Training Videos | AWS Services | Simplilearn
21. 21:13 AWS S3 Tutorial For Beginners | Amazon S3 Tutorial | Amazon Simple Storage Service | Simplilearn
22. 8:23 AWS Lambda Tutorial | Amazon Lambda Tutorial | AWS Tutorial | AWS Training Video | Simplilearn
23. 9:05 What is AWS? | AWS Tutorial For Beginners | Introduction to Amazon Web Services | Simplilearn
24. 5:40 AWS CloudFormation Tutorial | AWS Tutorial For Beginners | Simplilearn
25. 13:06 AWS VPC Tutorial | Amazon Virtual Private Cloud | AWS Training Videos | Simplilearn
26. 13:38 AWS EC2 Tutorial | Amazon EC2 Tutorial For Beginners | AWS Tutorial | AWS Services | Simplilearn
27. 39:55 AWS IAM Tutorial | AWS Identity And Access Management | AWS Tutorial | AWS Training | Simplilearn
28. 2:52 AWS CloudFront Tutorial | AWS Tutorial For Beginners | Simplilearn
29. 2:10 Why You Should Choose Simplilearn's AWS Solution Architect Associate Training | Simplilearn
30. 7:46 Amazon S3 Tutorial For Beginners | AWS S3 Bucket Tutorial | AWS Tutorial For Beginners | Simplilearn
31. 3:28:35 AWS Tutorial For Beginners - 2 | AWS Certified Solutions Architect Associate Tutorial | Simplilearn
32. 2:49:40 AWS Tutorial For Beginners - 4 | AWS VPC Tutorial | AWS Services | AWS Training Video | Simplilearn
33. 2:25 AWS IAM Tutorial | Identity And Access Management | AWS Training Videos | Simplilearn
34. 9:00 Why AWS? | AWS Tutorial For Beginners | Simplilearn
35. 10:32 What Is AWS | AWS Tutorial For Beginners | Simplilearn
36. 1:12:36 Planning And Designing Cloud Infrastructure | AWS Training Videos | Simplilearn
37. 15:01 AWS Best Practices | AWS Tutorial For Beginners | Simplilearn
38. 3:53 AWS Certified Solutions Architect Associate Level | Simplilearn
39. 3:19:48 AWS Tutorial For Beginners - 3 | AWS IAM Tutorial | AWS Services | AWS Training Video | Simplilearn
40. 11:47 Introduction To AWS Technical Essentials Certification Training | Simplilearn
41. 6:59 Amazon Glacier Tutorial | AWS Glacier Tutorial | AWS Tutorial | AWS Services | Simplilearn
42. 26:12 AWS Tutorial | Storage and Content Delivery | AWS Training Video | Simplilearn
43. 33:25 History And Evolution Of AWS | AWS Training Videos | Simplilearn
44. 3:38 AWS Redshift Tutorial For Beginners | Amazon Redshift Tutorial | AWS Training Video | Simplilearn
45. 2:22 AWS vs Azure - The Battle Of The Titans | AWS vs Azure Comparison | Azure vs AWS | Simplilearn
46. [Private video]
47. 7:52 AWS CloudWatch Tutorial | What is AWS CloudWatch | AWS Tutorial | AWS Training Videos | Simplilearn
48. 4:40 How To Create AWS Account | AWS Tutorial For Beginners | Simplilearn
49. 11:59 AWS ELB Tutorial | Elastic Load Balancer Tutorial | AWS Tutorial | AWS Training Video | Simplilearn
50. 4:38 Introduction To AWS SysOps Associate Certification Training | Simplilearn
51. 4:31 Introduction To AWS Developer Associate Certification | Simplilearn
52. 3:40 AWS Foundation Services: Networking | AWS Tutorial | Simplilearn
53. 7:46 AWS EBS Tutorial | Amazon Elastic Block Store | Simplilearn
54. 16:39 AWS Foundation Services: Compute | AWS Tutorial | Simplilearn
55. 6:09 AWS Foundation Services: Databases | AWS Tutorial | Simplilearn
56. 11:29 AWS Foundation Services: Storage | AWS Tutorial | Simplilearn
57. 2:19 AWS Marketplace Tutorial | AWS Tutorial For Beginners | Simplilearn
58. 59:11 AWS Architecture Tutorial | AWS Tutorial For Beginners | Simplilearn
59. 29:23 AWS Configuration Management Tutorial | AWS Tutorial For Beginners | Simplilearn
60. 3:01 A Look at Cloud Computing Landscape | What is AWS? | Cloud Video Tutorials | Simplilearn
61. 8:17 AWS Security Tutorial | AWS Tutorial For Beginners | Simplilearn
62. 4:17 Introduction to AWS Database Migration Service Course | Simplilearn
63. 4:58 Introduction To AWS Lambda Training | Simplilearn
64. 3:28 Introduction to Advanced Cloud Computing with AWS Certification Training | Simplilearn
65. 6:17 AWS Cloud Economics | AWS Tutorial For Beginners | Simplilearn
66. [Private video]
[[When Should I Use Amazon Aurora and When Should I use RDS MySQL?|https://www.percona.com/blog/2018/07/17/when-should-i-use-amazon-aurora-and-when-should-i-use-rds-mysql/]]
[[Create an Amazon VPC for Use with an Amazon RDS DB Instance|https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateVPC.html]]
!AWS Shield Features
!AWS Shield Advanced
''Enhanced detection''
AWS Shield Advanced provides enhanced detection, inspecting network flows of traffic to your protected Elastic IP address, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator or Amazon Route 53 resources. Using additional techniques like resource-specific monitoring, AWS Shield Advanced provides granular, resource- and region-specific detection of DDoS attacks. It also detects application layer DDoS attacks like HTTP floods or DNS query floods by baselining traffic on your resource and identifying anomalies.

''Advanced attack mitigation''

AWS Shield Advanced provides you with more sophisticated automatic mitigations for attacks targeting your applications running on protected Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 resources. Using advanced routing techniques, AWS Shield Advanced automatically provides additional mitigation capacity to protect against larger DDoS attacks. For customers with Business / Enterprise support, the AWS DDoS Response Team (DRT) also applies manual mitigations for more complex and sophisticated DDoS attacks. For application layer attacks, you can use AWS WAF to respond to incidents. With AWS WAF you can set up proactive rules like Rate Based Blacklisting to automatically block bad traffic, or respond immediately to incidents as they happen. There is no additional charge for using AWS WAF for application layer protection on AWS Shield Advanced protected resources. You can also engage directly with the DRT to place AWS WAF rules on your behalf, in response to an application layer DDoS attack. The DRT will diagnose the attack and, with your permission, can apply mitigations on your behalf.

''Visibility and attack notification''

AWS Shield Advanced gives you complete visibility into DDoS attacks with near real-time notification via Amazon CloudWatch and detailed diagnostics on the “AWS WAF and AWS Shield” Management Console or APIs. You can also view a summary of prior attacks from the “AWS WAF and AWS Shield” Management Console.

''DDoS cost protection''
AWS Shield Advanced comes with “DDoS cost protection”, a safeguard from scaling charges caused by usage spikes on protected Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53 resources during a DDoS attack. If any of these protected resources scale up in response to a DDoS attack, AWS will provide AWS Shield service credits for charges due to usage spikes. For more details on how to request service credits, see the AWS WAF and AWS Shield Advanced documentation.

''Specialized support''
For customers on Business or Enterprise support plans, AWS Shield Advanced gives you 24x7 access to the AWS DDoS Response Team (DRT), who can be engaged before, during, or after a DDoS attack. The DRT will help triage the incidents, identify root causes, and apply mitigations on your behalf.

''Global availability''
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations. You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your application. Your origin servers can be Amazon S3, Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS. You can also enable AWS Shield Advanced directly on an Elastic IP or Elastic Load Balancing (ELB) in the following AWS Regions: Northern Virginia, Northern California, Ohio, Oregon, Ireland, London, Frankfurt, Stockholm, Singapore, Sydney, Seoul, and Tokyo.
https://aws.amazon.com/shield/features/
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3077.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3082.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3084.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3093.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3095.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3096.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3097.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3098.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3099.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3100.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3101.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3102.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3103.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3104.jpg]]
[img[https://kspyhome.files.wordpress.com/2019/06/screenhunter-3105.jpg]]
C4 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio.

Features:

High frequency Intel Xeon E5-2666 v3 (Haswell) processors optimized specifically for EC2
Default EBS-optimized for increased storage performance at no additional cost
Higher networking performance with Enhanced Networking supporting Intel 82599 VF
Requires Amazon VPC, Amazon EBS and 64-bit HVM AMIs
|Model|vCPU*|Mem (GiB)|Storage|Dedicated EBS Bandwidth (Mbps)|Network Performance|
|c4.large|2|3.75|EBS-Only|500|Moderate|
|c4.xlarge|4|7.5|EBS-Only|750|High|
|c4.2xlarge|8|15|EBS-Only|1,000|High|
|c4.4xlarge|16|30|EBS-Only|2,000|High|
|c4.8xlarge|36|60|EBS-Only|4,000|10 Gigabit|
All instances have the following specs:

2.9 GHz Intel Xeon E5-2666 v3 Processor
Intel AVX†, Intel AVX2†, Intel Turbo
EBS Optimized
Enhanced Networking†
Use Cases

High performance front-end fleets, web-servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video-encoding.
[[aws1001]]
[[aws1002]]
[[aws1003]]
[[Ylinks]]
[[Ylinks - misc]]
[[Create Linux Instance]]
[[kindle book]]
[[links]]
[[articles]]
D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.
Features:

High-frequency Intel Xeon E5-2676 v3 (Haswell) processors
HDD storage
Consistent high performance at launch time
High disk throughput
Support for Enhanced Networking
|Model|vCPU*|Mem (GiB)|Storage (GB)|Network Performance|
|d2.xlarge|4|30.5|3 x 2000 HDD|Moderate|
|d2.2xlarge|8|61|6 x 2000 HDD|High|
|d2.4xlarge|16|122|12 x 2000 HDD|High|
|d2.8xlarge|36|244|24 x 2000 HDD|10 Gigabit|
All instances have the following specs:

2.4 GHz Intel Xeon E5-2676 v3 Processor
Intel AVX†, Intel AVX2†, Intel Turbo
EBS Optimized
Enhanced Networking†
Use Cases

Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, log or data-processing applications.
[[Computer POST and beep codes|https://www.computerhope.com/beep.htm]]
[[If your PC does not turn on anymore, try this|https://www.ghacks.net/2016/04/22/if-your-pc-does-not-turn-on-anymore-try-this/]]
[[Laptop does not start. Fixing the problem.|http://www.laptoprepair101.com/fixing-startup-problem/]]
[[Computer Has a Black Screen in Windows and Will Not Boot or Start Up|http://tips4pc.com/computer-problems/computer_has_a_black_screen_and.htm]]
[[Dell XPS|https://en.wikipedia.org/wiki/Dell_XPS#XPS_630]]
[[Computer will not boot to Windows 10|https://www.dell.com/support/article/au/en/aubsd1/sln297926/computer-will-not-boot-to-windows-10?lang=en]]
[[Windows is taking very long to boot to login screen|https://www.tenforums.com/performance-maintenance/49381-windows-taking-very-long-boot-login-screen.html]]
[[How to Fix a Black Screen in Windows 10|https://www.groovypost.com/howto/fix-black-screen-windows-10/]]
[[Why Does My Screen Go Black After the Windows Screen When I Boot?|https://askleo.com/why_does_my_screen_go_black_after_the_windows_screen_when_i_boot/]]
[[How to Fix Windows 10 Slow Boot/Startup After Update 2018|https://www.easeus.com/partition-manager-software/how-to-fix-windows-10-slow-boot-after-update.html]]
[[How to change startup programs|https://www.digitaltrends.com/computing/how-to-change-your-startup-programs/]]
[[Windows 7 Startup|https://www.techsupportalert.com/content/windows-7-startup.htm]]
F1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).
Instances Features:

High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
NVMe SSD Storage
Support for Enhanced Networking
FPGA Features:

Xilinx Virtex UltraScale+ VU9P FPGAs
64 GiB of ECC-protected memory on 4x DDR4
Dedicated PCI-Express x16 interface
Approximately 2.5 million logic elements
Approximately 6,800 Digital Signal Processing (DSP) engines
FPGA Developer AMI
|Model|FPGAs|vCPU|Mem (GiB)|SSD Storage (GB)|Networking Performance|
|f1.2xlarge|1|8|122|470|Up to 10 Gigabit|
|f1.4xlarge|2|16|244|940|Up to 10 Gigabit|
|f1.16xlarge|8|64|976|4 x 940|25 Gigabit|
For f1.16xlarge instances, the dedicated PCI-e fabric lets the FPGAs share the same memory space and communicate with each other across the fabric at up to 12 Gbps in each direction. The FPGAs within the f1.16xlarge share access to a 400 Gbps bidirectional ring for low-latency, high bandwidth communication.

All instances have the following specs:

2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
Intel AVX†, Intel AVX2†, Intel Turbo
EBS Optimized
Enhanced Networking†
Use Cases

Genomics research, financial analytics, real-time video processing, big data search and analysis, and security.
[[Getting Started with AWS|https://read.amazon.com.au/?asin=B007X6SMD6]]
[[Amazon EC2 Instance Types|https://aws.amazon.com/ec2/instance-types/]]
[[AWS Q&A |https://chercher.tech/aws-certification/aws]]
''AWS Architecting Solutions''
---------------------------------------------
''Mentor Intros ''
Scott Farrell 
• 3 AWS certifications - ACP, ASA , ASOA 
• Ba Commerce - major in Accounting 
• Masters of management - major in innovation 
• Background in sysadmin, development, consulting
pg02
---------------------------------------------
''Basic Information – Contacts ''
* Forums 
** The go-to point for all questions. A safe, collaborative space. 
** be on your best behaviour 
* Subject mentors 
** questions about content 
* Admin team (IT Masters) admin@itmasters.edu.au 
** Extensions, subject selection, credits, special consideration
pg03
---------------------------------------------
''Basic Information – Subject Style ''
*AWS focus / hands-on focus 
*Class time will also cover some theory and extension topics 
*Pre reading is expected. 
**lecture is not a summary of the text, it is assumed knowledge 
*up and coming deliverables 
*extension topics 
**beyond the text, real examples, case studies 
*discussion - Q&A
pg04
---------------------------------------------
''Housekeeping''
• eBooks 
• eExams
pg05
---------------------------------------------
''Materials ''
• https://learn.itmasters.edu.au 
• Forums 
• Weekly topic guides 
• Resources (slides, homework, readings)
pg06
---------------------------------------------
''Materials: Textbook ''
• Access via kindle. Can also view in a browser. 
• Exam focus on the textbook, and slides.
pg07
---------------------------------------------
''LABS ''
*LABS are intended to give you some hands on experience. 
*the LABS also lead to questions in class, or for the forums. 
*I will not be demo’ing material covered in the labs (as that is what the labs are for); I’ll use the class time to cover topics not well addressed elsewhere.
*organised by AWS 
*you’ll need to use your own AWS ‘free account’ 
**I think they still require a credit card to sign up. 
*try to turn off any resources after your lab. It conserves ‘free usage’ for later. 
**ec2 instances 
**volumes 
**elastic IPs. 
**RDS instances
*hands on is important for learning 
*some of the labs have some good detail 
**for example powershell scripts as ‘user data’ 
*I am not here to teach Windows/Linux 
**if the lab says Windows, and you use Linux at work, you can choose either (my bent is Linux, so don’t ask me about Windows).
**The lab only needs to mostly work to provide the hands-on; if it’s not 100% accurate, that’s fine.
pg8 , 9, 10
---------------------------------------------
''Quick Recap: What is Cloud? ''
• On-demand access to a shared pool of computing resources 
• (Near) infinite scalability 
• Self-service 
• Pay as you go 
• ''Separation of responsibilities''
---------------------------------------------
''Cloud / AWS is not cost saving ''
• if you are working on a design, don’t make the ‘cheapest’ choice
• You need to choose solutions that are ‘right sized’ and a ‘technology fit’
• If you are in a sprint race, don’t choose gum boots because they are on special. Not only will you get the answer wrong more often, you’ll look silly at the start line.
• AWS / cloud is rarely the cheapest solution if viewed as a single component.
---------------------------------------------
''Intro to EC2 ''
• Definition: A service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
• Alt definition: EC2 is a service that allows people to rent virtual machines that run in Amazon’s data centres.
• AWS terminology for a virtual machine is an “instance”
• Industry term VPS is analogous to instance
---------------------------------------------
''EC2 as Just a Virtual Machine ''
*AWS says: “EC2 instances are ephemeral.” But this isn’t a requirement. The “instances” can perform a traditional role - replacing on-premise/VPS/co-lo servers
**EBS for block storage 
***A virtual SAN, up to 20,000 IOPS and 500MB/s 
***Fault tolerant 
***More on EBS in week 2 
**Instance recovery 
**Uptime/machine lifetime is anecdotally “good” 
**Self-service provisioning - important for availability 
*This usage is very common
pg14
---------------------------------------------
''Demo: Provision and Connect to EC2 ''
• run through a lab 
• create a linux VM
---------------------------------------------
''EC2 as it’s Meant to be Used ''
*No SPOF (single point of failure) 
*Scale horizontally, not vertically. 
*Elastic 
*Treat machines as disposable / design for failure 
**Netflix chaos monkey https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116 
*Treat data centres (availability zones) much the same
pg16
---------------------------------------------
''Regions/Availability Zones (Briefly) ''
• Spread instances across multiple AZs 
• Sydney AWS outage 2016 https://www.crn.com.au/news/aws-sydney-suffers-outage-420476 
• https://aws.amazon.com/message/4372T8/ 
• Note the second-last paragraph. A bit of an “I told you so.” They said: “Customers that were running their applications across multiple Availability Zones in the Region were able to maintain availability throughout the event” 
• Average latency between AZs is 2ms https://www.quora.com/What-are-typical-ping-times-between-different-EC2-availability-zones-within-the-same-region
pg18
---------------------------------------------
''Extension - AWS/ec2 vs ??? comparison ''
*If you compare the basics - ec2 instance vs VPS, RDS vs installing your own DB server - AWS is 
**overly complicated/complex 
**over priced 
*consider that you get ‘best practice’ at each step 
*RDS is not a database server, it’s a tuned DB server with backup and a DB administrator 
*an ec2 instance is infrastructure as software, not a VPS. 
*compare a VPS (single VM) to an autoscaling web platform
pg20
---------------------------------------------
''Extension - Infrastructure as software ''
*AWS is a LOT more than browser-driven ec2 instances 
*software has a great innovation rate 
**anything that can be performed in the web console, you can code using the API 
**good handling of versions, source control, test, release 
**quickly build, deploy, upgrade, release infrastructure. 
*software can control the hardware. A good example is spinning up ec2 instances for each report request. A report takes 50 core-minutes. How many reports can you create in any given 5-minute period?
pg21
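The report example above is simple arithmetic. A minimal sketch - only the 50 core-minutes per report and the 5-minute window come from the slide; the fleet sizes are illustrative assumptions:

```python
# Sizing sketch for the slide's example: a report costs 50 core-minutes.
# The fleet sizes below are illustrative assumptions, not from the slide.

def reports_per_window(instances, cores_per_instance,
                       window_minutes=5, core_minutes_per_report=50):
    """Core-minutes available in the window divided by the cost of one report."""
    return instances * cores_per_instance * window_minutes / core_minutes_per_report

# Ten 16-core instances: 10 * 16 * 5 / 50 = 16 reports per 5-minute window.
print(reports_per_window(10, 16))
```

The point of the slide survives the arithmetic: because instances can be spun up programmatically, the answer scales with however many instances you choose to launch, not with a fixed server count.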
---------------------------------------------
Questions?
---------------------------------------------
''AWS Solutions Architect''

Week 2: Storage and Networking
*Storage
**EBS
**S3
**EFS
*Networking
**VPC
pg01
---------------------------------------------
''Labs''

*some issues with instance availability
*other issues ?
pg02
---------------------------------------------
''Storage Overview''

*We are reviewing data at rest
*Understand the options
*Match to requirements
pg03
---------------------------------------------
''Options Matrix''

file based or block based

|File Based|Block Based|
|S3|EBS|
|EFS|Local / Instance Store|

pg04
---------------------------------------------
''File Storage''


•Store / retrieve an entire file at a time - Amazon calls this object storage
•Single file handle
•Sequential access
•Limited random access
•More like a file server, NAS server, or ftp server.
•Good for backups, large files
•Good for many servers/EC2 instances accessing simultaneously 
•S3 and EFS are file storage systems


pg05
---------------------------------------------
''Block Storage''


•Physical disks are block storage devices
•More like a c: drive
•They need to be ‘formatted’ for use. Usually NTFS or ext4.
•Allows random access to files
•allows multiple requests at once
•generally more flexible
•good for databases, lots of small files.
•Local/Instance store and EBS are examples of block storage

pg06

---------------------------------------------
''Options Matrix''

file based or block based

|File Based|Block Based|
|S3|EBS|
|EFS|Local / Instance Store|
pg07
---------------------------------------------
''S3 Overview''


•Object/File storage
•Higher level, abstracted from the blocks
•Programmatic, API based 
•Many simultaneous clients
•not a shared filesystem, but has shared access
•Nearly unlimited scale
•SUPER reliable and resilient
•Pay per GB ''used'' ($0.023 USD/GB) (¼ the price of EBS)

pg08
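A back-of-envelope cost check using the slide's per-GB figure. The EBS price below is an assumption back-derived from the "¼ price of EBS" comparison, not a quoted AWS rate - check current pricing before relying on it:

```python
# Monthly storage cost at the per-GB prices quoted on the slide.
# EBS_PER_GB is an assumption implied by the slide's "1/4 price" comparison.

S3_PER_GB = 0.023   # USD/GB-month, from the slide
EBS_PER_GB = 0.092  # USD/GB-month, assumed: 4 x the S3 figure

def monthly_cost(gb, per_gb):
    return round(gb * per_gb, 2)

print(monthly_cost(500, S3_PER_GB))   # 500 GB on S3
print(monthly_cost(500, EBS_PER_GB))  # the same 500 GB on EBS
```

The gap widens with scale, which is why large media/backup data tends to land on S3 rather than EBS volumes.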
---------------------------------------------
''S3 Limits''

•Object size: 5TB
•Object count: Unlimited
•Bucket count: 100 by default (a soft limit that can be raised)
•Throughput/partitioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

pg09
---------------------------------------------
''S3 Security''
•very granular/specific. 
•authorise an instance using IAM roles. Good for web applications.
•authorise an application: IAM user with roles, secret key
•pass the security right down to the end user. The end user requests the S3 URL. This requires STS federated access. Good for enterprise environments.
•sign a specific URL: security is checked by the application and the URL signed, so anyone with that URL can download the file. 
pg10
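The last option - signing a specific URL - can be illustrated with a toy HMAC scheme. This is NOT AWS Signature Version 4; in practice you would use the SDK or `aws s3 presign`. The secret and helper names here are illustrative only:

```python
# Conceptual sketch of URL signing: the application checks authorisation,
# then issues a URL carrying an expiry and an HMAC signature. Anyone holding
# the URL can fetch the file until it expires. NOT the real AWS algorithm.
import hashlib
import hmac
import time

SECRET = b"application-secret"  # assumption: a key held only by the application

def sign_url(url, expires_in=3600, now=None):
    """Append an expiry timestamp and an HMAC-SHA256 signature to the URL."""
    expiry = (now if now is not None else int(time.time())) + expires_in
    payload = f"{url}?expires={expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&signature={sig}"

def verify(signed):
    """Recompute the signature over the payload and check the expiry."""
    payload, _, sig = signed.rpartition("&signature=")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    expiry = int(payload.rpartition("expires=")[2])
    return hmac.compare_digest(sig, expected) and expiry > time.time()

signed = sign_url("https://example.com/testfile.txt")
print(verify(signed))
```

Tampering with the URL or waiting past the expiry makes `verify` fail, which is the same property a real presigned S3 URL gives you.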
---------------------------------------------
''S3 demo''

*you can use browser
*there is an API to code against
**therefore loads of 3rd-party apps can connect directly; lots of backup apps connect directly.

aws s3 ls s3://sfarrell
aws s3 cp testfile.txt s3://sfarrell/
aws s3 cp s3://sfarrell/testfile.txt -
wget -O - https://sfarrell.s3.amazonaws.com/testfile.txt
aws s3 presign s3://sfarrell/testfile.txt

pg11
---------------------------------------------
''EFS Overview''

*NFS store (Linux/Unix network file share)
**Distributed file system
**Many simultaneous instances can connect
**shared filesystem
**FSx is a newer Windows-based option - not likely in the exam
*Highly available
*highly scalable
*sluggish for smaller files; locking is slow.
*Pay per GB ''used'' ($0.30 USD/GB) (13x the price of S3)
*Pay for throughput

pg12
--------------------------------------------
''Instance store''

*local storage
*physically connected to instance
*can be easily lost/destroyed
**failover event
**instance stopped
**cannot snapshot
*good for temporary files or swap files
*often SSD and very fast
*free

pg13
---------------------------------------------
''Storage Performance''

*bandwidth
**how much can be read/written per second
**sometimes measured per user, or aggregated
**usually measured with serial reads/writes
*IOPS
**Input Output Operations per Second
**read/write requests per second
**usually measured as random requests
*there is also cross over between each, queue depths, etc.

pg14
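The two metrics are related: sustained IOPS at a given request size implies a bandwidth figure. A small sketch - the 16 KiB request size is an assumed example, not from the slide:

```python
# Relating IOPS and bandwidth: sustained IOPS at a given request size
# implies a throughput figure. The 16 KiB request size is an assumption.

def iops_to_mib_per_s(iops, block_kib):
    """IOPS times request size (KiB), converted to MiB/s."""
    return iops * block_kib / 1024

print(iops_to_mib_per_s(3000, 16))  # 3,000 IOPS of 16 KiB requests
print(iops_to_mib_per_s(100, 16))   # 100 IOPS of 16 KiB requests
```

So a volume bursting at 3,000 IOPS of small random requests moves under 47 MiB/s, well below what the same volume can do with large sequential reads - which is why the two numbers are quoted separately.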

---------------------------------------------
''EBS''


•block storage, not shared
•easily backed up with snapshots
•pay per size 
•SSD and HDD
•GP2 should be your ‘go to’ default.
•provisioned IOPS for high/reliable IOPS
•different instance types have different EBS bandwidth, though IOPS limits are similar
•https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

pg15

---------------------------------------------
''EBS -GP2 IOPS''


•credits system
•baseline of 3 IOPS per GiB of storage, with a floor of 100 IOPS
•burst up to 3000 IOPS
•100 IOPS is the same speed as an old 7200rpm HDD ….. slow slow slow …. it will feel like the server is broken/stopped. 
•careful if you leave the Windows swap file on the EBS gp2 volume

pg16
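The gp2 rules above reduce to a one-line formula. A hedged sketch - it ignores the upper baseline cap that very large volumes reach, so treat it as an approximation of the documented behaviour:

```python
# Sketch of the gp2 rules: 3 IOPS per GiB with a 100-IOPS floor, and
# bursting to 3,000 IOPS until the baseline itself exceeds the burst level.
# Ignores the documented upper baseline cap for very large volumes.

GP2_FLOOR_IOPS = 100
GP2_BURST_IOPS = 3000

def gp2_baseline_iops(size_gib):
    return max(GP2_FLOOR_IOPS, 3 * size_gib)

def gp2_max_iops(size_gib):
    """Burst applies only while the baseline is below the burst level."""
    return max(GP2_BURST_IOPS, gp2_baseline_iops(size_gib))

print(gp2_baseline_iops(8))    # a small boot volume sits on the 100-IOPS floor
print(gp2_baseline_iops(500))  # 1,500 baseline IOPS
print(gp2_max_iops(500))       # can still burst to 3,000
```

This is why an 8 GiB boot volume feels fine in bursts but crawls once its credits are spent: it falls back to the 100-IOPS floor.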
-------------------------------------------
''EBS -provisioned IOPS''


•anything that needs more reliable IOPS
•eventually you can saturate the bandwidth with IOPS
•sometimes multiple smaller volumes RAIDed together helps
•sometimes better design helps (e.g. horizontal scaling).
•higher IO instances

pg17
---------------------------------------------
''Sharing''

|Shared|Not Shared|
|S3|EBS* can be shared with Windows sharing|
|EFS|Local / Instance Store|

pg18

---------------------------------------------
''Use Cases -S3''


•corporate file server replacement
•can be used to sync like dropbox
•image / media assets for websites
•archival storage
•any files that are large (greater than a few GB) are a good case to have on S3
•files that need to be shared - S3 is sometimes a solution
•backups
•super reliable

pg19
---------------------------------------------
''Use Cases -EBS''

•boot / OS drives
•checkout (git clone) code, and execute
•software installation
•distributed node storage
•medium reliability -but easy to back up with snapshots
•*databases? - should use RDS instances instead
•high IOPS requirements


pg20
---------------------------------------------
''Use Cases -EFS''

•horizontal scaling needs access to shared code/media
•some of this can be supported by S3 (media, images), but other requirements cannot
•sometimes checking code out locally to EBS works; sometimes using code from EFS is appropriate
•general place to keep shared scripts, and management code
•medium/low reliability

pg21

---------------------------------------------
''Use Cases -instance store''


•temporary files
•swap space
•checked out code to execute
•least reliable


pg22
---------------------------------------------
''Labs''


•Qwiklabs
•more free than free
•all assets/data/instances expire after the lab expires
•locked - so don't change region, don't expect to be able to use features not in the lab


pg23
---------------------------------------------
''Questions?''


And assessment discussion


---------------------------------------------
AWS Solutions Architect


Week 3: Databases

pg 01
---------------------------------------------
''RDS (Relational Database Service)''


*Managed SQL installation: EC2 + database software under the hood:
**MySQL
**PostgreSQL
**Microsoft SQL
**Oracle
**MariaDB
**Aurora

*Managed = as if it includes a database administrator
**Automatic updates -application and operating system
**Tuned -but you can make some changes
**secured
**Automatic backups, logs, rollbacks, snapshots, storage
**Automatic DR
**HA option
**No SSH access

Pg02
---------------------------------------------
''RDS Advanced''

*Auto DR is approx. 10 minutes
*HA -Multi-AZ
**Twice the price
**Secondary instance is a standby
**Failover time roughly 2 minutes (measured: http://giuppo.github.io/aws-rds-production-downtime/)
**Failover used for regular maintenance
*Read replicas

Pg03

---------------------------------------------
''RDS Aurora''

*Drop-in replacement for MySQL and Postgres*
*Zero-downtime patching
*Up to 5x performance
*Failover time is 15-60 seconds
*Dynamic storage, delta cloning
*Only up to MySQL v5.6 compatibility

pg04
---------------------------------------------
''A resilient, durable architecture''
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4182.jpg]]

Figure: Amazon RDS database instances - master and Multi-AZ standby; application in Amazon EC2 instances; an Elastic Load Balancing load balancer instance; DB snapshots in Amazon S3
---------------------------------------------
''But I can install my database on ec2 -and it’s cheaper''

*No management / support
*DIY backups, logs, rollbacks
*Pay for a DB administrator
*Now you have to work out how to scale yourself
*How about failover?
*Is it still cheaper?
*Poor advice to your client, you’ll wear the failure
*You’ll get the answer wrong in exams

pg06
---------------------------------------------
''Database scalability''

*Vertical -larger instance
*Horizontal -read replicas
*Faster tech - Aurora vs MySQL
*Traditional performance tuning 
**Indexes
**Caching
*But there are limits - you can end up with a lot of work trying to scale RDS
*Databases don't scale as easily as web/php/java
*At least start with Aurora - the most scalable SQL
*Now AWS has Aurora Serverless 
**https://aws.amazon.com/rds/Aurora/serverless/
**Auto scaling Aurora
*Think about
**Low scale / high scale / cloud scale
**Match to appropriate technology

---------------------------------------------
''Database scalability reached -what now?''

*Simplify the application?
*Cloud scale - e.g. a mobile game, a million users, needing to access their account.
**Traditionally you'd write custom code to cache this, and 'look-aside' from your main app, but store the 'truth' in a database.
*Alternatively start with an environment/database built for cloud scale
pg09
---------------------------------------------
''SQL vs NoSQL''
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4183.jpg]]

---------------------------------------------
''DynamoDB –AWS’ key-value+store''

*''Engineered for throughput and resilience''
**Typical operation (e.g. insert, update, delete) takes < 10ms.
**Synchronous replication over multiple AZ’s (3).
**BUT replication may take > 10ms, so reads are eventually consistent (EC) by default
*''Provisioned by throughput (reads/writes) not storage/iops''
**1 WCU (Write Capacity Unit): <=1KB write/sec (3600 writes/hr; ~2.5M writes/mon)
**1 RCU (Read Capacity Unit) = 4KB reads/sec (IRL: 2 EC or 1 strongly consistent)
**Can adjust (scale) CU on a table/index based on CloudWatch metrics or manually

*''Priced by throughput (plus ~$0.25/GB/month for storage)''
**100 WCU in ap-southeast-2 = ~$53/month; 100 RCU = ~$106/month
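The capacity-unit arithmetic above can be sketched in Python - a rough stdlib model of the rounding rules (writes round up per 1KB, strongly consistent reads per 4KB, eventually consistent reads cost half; the function names are mine, not an AWS API):

```python
import math

def wcu_per_write(item_kb: float) -> int:
    """WCUs consumed by one standard write: 1 WCU per 1KB, rounded up."""
    return max(1, math.ceil(item_kb))

def rcu_per_read(item_kb: float, eventually_consistent: bool = True) -> float:
    """RCUs consumed by one read: 1 RCU per 4KB strongly consistent;
    an eventually consistent read costs half."""
    units = max(1, math.ceil(item_kb / 4))
    return units / 2 if eventually_consistent else units

# The 400KB maximum record size from the slides costs 400 WCU per write
print(wcu_per_write(400))        # 400
print(rcu_per_read(400, False))  # 100 strongly consistent
print(rcu_per_read(400, True))   # 50.0 eventually consistent
```

This is why the slide flags the 400KB maximum with "(!!)" - one maximal insert burns as much write capacity as 400 small ones.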


pg11

---------------------------------------------
Demo: DynamoDB


•Create a table
•Add a record to the table from a python app
•View the resulting record



---------------------------------------------
{{{
Demo

table: game1 - gamerID, name, score

aws dynamodb list-tables --region us-east-1

aws dynamodb describe-table --table-name game1 --region us-east-1

cat query.json
{
  "gamerID": {"S": "1"}
}

aws dynamodb get-item --table-name game1 --key file://query.json --region us-east-1

#!/bin/bash
aws dynamodb put-item \
  --table-name game1 \
  --item '{
    "gamerID": {"S": "2"},
    "name": {"S": "Peter"},
    "score": {"S": "5"},
    "bonus": {"S": "2"}
  }' \
  --return-consumed-capacity TOTAL --region us-east-1
}}}
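As a companion to the CLI demo, here is a small hypothetical stdlib-only Python helper that builds DynamoDB's typed attribute format from a plain dict (the demo stored score/bonus as strings; this sketch models numeric values with the "N" type instead):

```python
import json

def to_attr(value):
    """Convert a plain Python value into DynamoDB's typed attribute format."""
    if isinstance(value, bool):          # check bool before int: bool is an int subclass
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}         # DynamoDB numbers travel as strings
    if isinstance(value, str):
        return {"S": value}
    raise TypeError(f"unsupported type: {type(value)}")

def to_item(record: dict) -> dict:
    """Build a full put-item payload from a plain record."""
    return {k: to_attr(v) for k, v in record.items()}

item = to_item({"gamerID": "2", "name": "Peter", "score": 5, "bonus": 2})
print(json.dumps(item, indent=2))
```

The resulting JSON is what `aws dynamodb put-item --item` expects on the command line.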
---------------------------------------------
DynamoDB –General features


•Data types
•Simple/scalar: number, string, boolean, binary, null (unset)
•Sets: string, number and binary sets
•Complex: lists, maps (i.e. JSON; limited to 32 levels of nesting)

•Schema-less: can add/remove (unindexed) attributes at any time.
•Max record size: 400KB. Note that inserting this maximum would consume 400 WCU(!!)

•[Advanced]
•Triggers: uses stream events, works with Lambda.
•Atomic ops: number increment/decrement; set/list/map add/remove element
•Expressions: per-request server-side constraints on attributes (scoped to this object)
•Published protocol: digitally signed JSON-over-HTTP requests.






---------------------------------------------
DynamoDB –What’s different


•Indexes (must be number, string or binary)
•Single primary key per table: partition/hash value and optional range value
•If an attribute is not indexed, you'll need a table scan. Or an index.

Not a relational DB: no in-built referential integrity

•Two types of index:
•LSI (Local Secondary Index) has the same hash/partition key, but a different range key
•GSI (Global Secondary Index) uses a different hash/partition key. Cost implications.

---------------------------------------------
No SQL –Why?


•Scaling is only part of the answer
•Matching database to code
•OO (Object Oriented) code matches DynamoDB
•OO code models the business, shaped like the business
•no SQL models the code, shaped like the code



•Dynamic data structure
•‘Collections’ based
•Code creates the database as it goes
•Document based vs records. (like Lotus Domino)



---------------------------------------------
No SQL –What’s wrong with SQL?


•Traditional relational databases are rigid
•It's in the name - structured - for a structured / ordered / defined world
•SQL tables/schema changes need to be matched to code releases
•Variants generally require schema changes
•you can subvert a table with key/key/value pairs (now it looks like no SQL)

•As business requirements change, the SQL tables feel more rigid and restrictive.
•Perfect for accounting, not so perfect for social media engagement



---------------------------------------------
No SQL –Where?


•Start ups
•New applications
•Short lived applications/business cases
•High scale requirements
•Where data doesn't need to be compared, aggregated, summed, graphed.
•Agile / rapid / flexible / iterative / evolving - matching tech to ideas/people/business
•The biggest failure of IT is matching requirements. no SQL assumes requirements are fluid



---------------------------------------------
No SQL –Historically


•IBM Lotus Notes / Domino was a no-SQL database
•Big in the 90s
•Match GUI features/fields to database fields
•Business / people focused
•Used the concept of a business document being routed between staff
•https://en.wikipedia.org/wiki/IBM_Notes
•'IBM Notes and Domino is a cross-platform, distributed document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc.'


---------------------------------------------
Questions?


And assessment discussion


---------------------------------------------
''AWS Solutions Architect 
Week 4: Security 
Protecting your cloud deployment 
IAM (Identity and Access Management)''
---------------------------------------------------
''Protecting your cloud deployment ''
• internet nasties 
• automated hacking 
• DDoS 
• application hacking <-- hole
• authorised user access
pg2
gang hacking
---------------------------------------------------

''AWS security ''
*Several Levels 
**DDoS mitigation 
***AWS shield / advanced, AWS WAF, AWS marketplace - deep inspection/IPS 
** infrastructure 
***VPC, security groups, NACLs - think of this as a traditional firewall 
***good backups 
**access control 
***AWS IAM 
**logging - AWS CloudTrail
pg3
---------------------------------------------------
''AWS security - DDoS mitigation ''
*AWS shield
**networks can be hacked - AWS shield protects the network 
**protects against layer 3/4 attacks 
**basic is free, all customers get AWS shield. This is a HUGE value to customers from AWS. 
**advanced is USD$3000/month. for large corporates.
pg4
---------------------------------------------------
''AWS security - DDoS mitigation ''
*AWS WAF - web application firewall
**applications can be hacked - a WAF protects an application 
**protects against layer 7 / web attacks, 
**repeated login attempts, rate based urls 
**sql injection 
**price per rule , plus small fee per URL request 
**managed rules - from marketplace providers 
**https://aws.amazon.com/marketplace/solutions/security/waf-managedrules

pg5
---------------------------------------------------
''AWS security - marketplace ''
*marketplace 
**managed AWS WAF rules 
**security appliances - from most popular vendors (probably EC2 instance)
**examples on next slide
pg6
---------------------------------------------------
''AWS security - infrastructure ''
*infrastructure - ec2 instances 
**ec2 instances can be hacked, the operating system can be hacked 
**security groups, NACLs can protect your ec2 instances 
**very similar to a traditional firewall 
**allow ip/port/protocol combinations (ports 443 ....
**can use ‘security groups’ names as source/destinations (web server, database server 3306)
**break up your infrastructure into different subnets in your VPC
**apply different rules to different subnets
pg8
---------------------------------------------------
''IAM Overview ''
*Manage access to your AWS account/AWS services 
*Users 
*Groups 
*Policies 
**Are versioned JSON documents that describe permissions 
**Are assigned to users, groups or roles 
*Roles
pg9
---------------------------------------------------

''IAM basics''
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4173.jpg]]
pg10
----------------------------------------------------
JSON documents...
''Anatomy of a Policy ''
{{{
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::ite531-exam-answers"
  }
}
}}}
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4176.jpg]]
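A quick way to sanity-check a policy like this before pasting it into IAM is to run it through a JSON parser - note the straight ASCII quotes, since smart quotes copied from slide decks will not parse:

```python
import json

# The slide's example policy as a plain string
policy_text = """
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::ite531-exam-answers"
  }
}
"""

policy = json.loads(policy_text)          # raises ValueError on broken JSON
assert policy["Version"] == "2012-10-17"  # the policy-language version element
stmt = policy["Statement"]
assert stmt["Effect"] in ("Allow", "Deny")
print(stmt["Action"])  # s3:ListBucket
```

This only checks that the document is well-formed JSON with the expected top-level fields; IAM itself validates the actions and ARNs.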
pg11
-----------------------------------------------------
''Access keys ''
*Access key ID / Secret access key 
*Attached to a user 
*Enable/disable, create/delete (rotate) 
*I hate them 
**get left lying around 
**can’t work out what they are 
**don’t know if they are valid
pg12
-----------------------------------------------------
''AWS CLI ''
*Command line version of AWS web console 
*Uses AWS API behind the scenes 
*Authenticates using IAM access key 
*Installed by default on EC2 instances 
*Install elsewhere: 
**pip install --upgrade --user awscli 
**aws configure 
*Usage example: List all S3 buckets in an account 
**aws s3 ls
pg13
-----------------------------------------------------
''IAM Roles ''
*Like a user, but without a password/access key 
*For: 
**Cross-service AWS authorisation (e.g., allow your Lambda function to access a 
	  DynamoDB table, or allow your EC2 instance to access an S3 bucket) 
**Federated access 
**allocate to an ec2 instance
pg14
-----------------------------------------------------
''Demo: S3 access policy ''
• Create an ec2 instance 
• use aws-cli to list buckets … fails 
• create policy, assign to role, assign to instance 
• now retry list buckets
   (Amazon Linux 2 AMI instance....)
pg15
-----------------------------------------------------
''secure AWS root credentials ''
*https://docs.aws.amazon.com/IAM/latest/UserGuide/id_rootuser.html 
*enable MFA on AWS root account 
*only use root account to create initial users 
*delete / disable root access/secret keys - forces you to use other users.
pg16
-----------------------------------------------------
''Questions? ''
And assessment discussion
Exam:
  * Slides
  * text book
  * product pages and FAQs
pg17
-----------------------------------------------------
''Demo: wp-hosting-performance-check ''
* https://wordpress.org/plugins/wp-hosting-performance-check/ 
* distributed load test for WordPress. 
* challenges: 
** enough CPU power to run the test - 25,000 simulated users. 
** vertical scaling to 72 cores enough for roughly 50% of requirements 
** ran into challenges with open tcp sockets, 5 sockets per simulated user. 
** horizontal scaling, more cores, more tcp sockets
pg18
-----------------------------------------------------
Demo: wp-hosting-performance-check - design 
* uses ec2 spot instance fleet 
**aggregates CPU, network, tcp sockets 
** launches right sized fleet for task 
* dreaded golden image 
** slaves use userdata and wget to load test data, upload results when done, halt when finished 
** combine results on master 
* VPC trusted for bootstrap and results reporting, security groups 
* ec2 instance trusted for ec2 stop/start
pg19
-----------------------------------------------------
''Demo: wp-hosting-performance-check - components ''

{{{

aws ec2 request-spot-fleet --region us-east-1 --spot-fleet-request-config file://this_file.json
{
  "SpotPrice": "0.01",
  "TargetCapacity": 10,
  "IamFleetRole": "arn:aws:iam::49xxxx09:role/aws-ec2-spot-fleet-tagging-role",
  "Type": "request",
  "TerminateInstancesWithExpiration": true,
  "LaunchSpecifications": [
    {
      "ImageId": "ami-axxxx0d0",
      "KeyName": "xxx2",
      "SecurityGroups": [
        {
          "GroupId": "sg-xxxxac84"
        }
      ],
      "InstanceType": "m3.large",
      "UserData": "IyEvYmluL2Jhc2gKcm0gLWYgL2V0Yy9yYy5sb2NhbAojL2V0Yy9yYy5sb2NhbAojcn…. snip …ZHRlc3QvCmhhbHQK",
      "SubnetId": "subnet-xxxx39, subnet-xxxadc, subnet-9xxxcf5, subnet-xxxx8d3"
    }
  ]
}
}}}
pg20
-----------------------------------------------------
''Demo: wp-hosting-performance-check - components ''
user-data 
echo template | sed /search/replace | base64 
{{{
#!/bin/bash 
logname=###SECRET###.`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`.log 
timeout ###TIMEOUT### /loadtest/apache-jmeter-4.0/bin/jmeter -n -t /loadtest/job.jmx -l /loadtest/${logname} 2>&1 >> /loadtest/jreport.log 
scp /loadtest/${logname} loadtestserver:/loadtest/ 
halt
}}}
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4177.jpg]]
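The echo | sed | base64 templating step can be sketched in Python with the stdlib; the SECRET/TIMEOUT values below are placeholders, not the real demo values:

```python
import base64

# user-data template - ###SECRET### / ###TIMEOUT### are substituted per run
# (the jmeter command line is abbreviated from the slide)
template = """#!/bin/bash
logname=###SECRET###.`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`.log
timeout ###TIMEOUT### /loadtest/apache-jmeter-4.0/bin/jmeter -n -t /loadtest/job.jmx -l /loadtest/${logname}
scp /loadtest/${logname} loadtestserver:/loadtest/
halt
"""

# hypothetical values - the real ones come from the master's run config
script = template.replace("###SECRET###", "run42").replace("###TIMEOUT###", "3600")

# EC2 expects UserData base64-encoded - the same thing `| base64` produced
user_data = base64.b64encode(script.encode()).decode()
assert base64.b64decode(user_data).decode() == script  # round-trips at boot
```

The encoded string is what ends up in the "UserData" field of the spot-fleet request JSON.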
pg21
-----------------------------------------------------
''Demo: wp-hosting-performance-check - components ''
IAM policy for the master 
allocated to the running ec2 master instance 
{{{
{
  "Statement": [
    {
      "Action": [
        "ec2:RequestSpotFleet",
        "ec2:CancelSpotFleetRequests"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
}}}
pg22
-----------------------------------------------------
''Demo: wp-hosting-performance-check - outcomes'' 
*Vertical scaling was quicker to launch, around 20 seconds 
**used "ec2:Start*", "ec2:Stop*", "ec2:ModifyInstanceAttribute", "Resource": "arn:aws:ec2:::instance/i-0b0xxxx92faa" 
**c5 instances did boot about 5 seconds quicker 
*spot instance fleet takes around 60-70 seconds to launch. 
*needed a sleep in startup scripts as cloud-init by amazon is very busy. plus tar c files >/dev/null - to read files before using, as they are lazy loaded. 
*at 100+ instances - needed to be very light touch on the master. needed to exclude rsync, and motd for ssh. 
*golden images still a pain, round trip for small changes is slow
pg23
-----------------------------------------------------
''Week 4: Extra slides you may find helpful 
• ARNs – Amazon Resource Names 
• Versions vs versions 
• The .aws/credentials file (multiple credentials for AWS CLI)''
pg24
-----------------------------------------------------
''ARNs – Amazon Resource Names ''
• Identifier for AWS resources 
• Supports wildcards and implicit "don't cares" (i.e. groups of matching resource instances) but VERY specific to the resource in question. 
http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
pg25
-----------------------------------------------------
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4174.jpg]]
pg26
-----------------------------------------------------
''ARN Examples (1) ''
*S3 arn:aws:s3:::bucket_name[/object_name] 
**A bucket: arn:aws:s3:::potr_backups 
**An object: arn:aws:s3:::potr_backups/mikec/20170701/marks.doc 
**Variables (policy version 2012-10-17): arn:aws:s3:::potr_backups/${aws:username}/ 
*DynamoDB arn:aws:dynamodb:region:account-id:table/tablename[/index/index_name] 
** A table: arn:aws:dynamodb:us-east-2:1234567890:table/surveys 
**An index: arn:aws:dynamodb:us-east-2:1234567890:table/surveys/index/by_user 

TIP: Granting a permission on a DynamoDB table does not automatically grant the same permission to indexes in the table. 
While an index may appear to be “under” the table because it looks like a path in the ARN, that’s purely for naming/grouping – the index is its own resource. If you find that you can query a table, but not an index, then you’ve probably forgotten index permissions…
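ARNs are colon-delimited with a free-form resource part (which may itself contain ':' or '/'), so a sketch parser must split at most five times:

```python
def parse_arn(arn: str) -> dict:
    """Split an ARN into its six fields: arn:partition:service:region:account:resource.
    The resource part may contain ':' or '/', so split at most 5 times."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not an ARN: {arn}")
    keys = ("prefix", "partition", "service", "region", "account", "resource")
    return dict(zip(keys, parts))

table = parse_arn("arn:aws:dynamodb:us-east-2:1234567890:table/surveys")
print(table["service"], table["resource"])  # dynamodb table/surveys

# S3 ARNs leave region and account empty - bucket names are global
bucket = parse_arn("arn:aws:s3:::potr_backups")
print(repr(bucket["region"]))  # ''
```

Note the empty region/account fields in the S3 example - that is why S3 ARNs have the `arn:aws:s3:::` run of colons.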
pg27
-----------------------------------------------------
''ARN Examples (2) ''
*IAM arn:aws:iam::account-id:entity-type[/entity-id] 
Lots of entity types; the examples below are the ones you’re most likely to encounter 
** Account root:arn:aws:iam::1234567890:root 
**IAM user: arn:aws:iam::1234567890:user/mikec 
**Hierarchical IAM user: arn:aws:iam::1234567890:user/staff/sysadmin/mikec 
**IAM group: arn:aws:iam::1234567890:group/auditors [hierarchy allowed] 
**All users+: arn:aws:iam::1234567890:user/* 
**All groups+: arn:aws:iam::1234567890:group/* 
**IAM role: arn:aws:iam::1234567890:role/S3delete 

+ Wildcarded groups/users are only valid in the Resources part of a policy document, not as the principal
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4178.jpg]]
pg28 
-----------------------------------------------------
''Notes about policy versions and the version element ''
*The version element in a policy specifies the version of the policy language in use (i.e. the syntax and semantics for elements of that policy language) 
**Theoretically allows Amazon to develop new policy languages or add extra features/refinements to existing policy languages. 
**AWS services (e.g. CloudFormation) will complain if not given a valid version 
*Different to AWS policy versions, which are a retained history of policy (current + previous 4) used to roll back "messed up" policy changes. 
**http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managedversioning.html
pg29
-----------------------------------------------------
''Setting up .aws/credentials ''
Read: http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html first
{{{
[default]
aws_access_key_id = AKxxxxxxxxxxxxxxxxxQ
aws_secret_access_key = xy123xy123xy123xy123xyetc

[work]
aws_access_key_id = AKzzzzzzzzzzzzzzzzzQ
aws_secret_access_key = ab456ab456ab456ab456abetc
}}}
The section headers ([default], [work]) are the profile names.
Select/activate a profile in the current session:
*Bash: user$ export AWS_PROFILE=work
*CSH & friends: user% setenv AWS_PROFILE work
*PowerShell: PS C:\> Set-AWSCredential -ProfileName work
For all of these, you will need to set a region too, either in the AWS_REGION environment variable, or in ~/.aws/config
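The credentials file is plain INI, so Python's configparser can read it - a sketch using the placeholder keys from the slide:

```python
import configparser

# the slide's file layout - keys are placeholders, not real credentials
creds = """
[default]
aws_access_key_id = AKxxxxxxxxxxxxxxxxxQ
aws_secret_access_key = xy123xy123xy123xy123xyetc

[work]
aws_access_key_id = AKzzzzzzzzzzzzzzzzzQ
aws_secret_access_key = ab456ab456ab456ab456abetc
"""

cfg = configparser.ConfigParser()
cfg.read_string(creds)
print(cfg.sections())  # ['default', 'work']

profile = "work"  # what AWS_PROFILE would select
print(cfg[profile]["aws_access_key_id"])  # AKzzzzzzzzzzzzzzzzzQ
```

In real use you would read ~/.aws/credentials from disk with cfg.read(); the AWS CLI and SDKs do the equivalent parsing themselves.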
[img[https://kspyhome.files.wordpress.com/2019/07/screenhunter-4175.jpg]]
pg30
-----------------------------------------------------
http://kosapy.tiddlyspot.com/
http://dell.tiddlyspot.com/
P3 instances are the latest generation of general purpose GPU instances.

Features:

Up to 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
High frequency Intel Xeon E5-2686 v4 (Broadwell) processors for p3.2xlarge, p3.8xlarge, and p3.16xlarge.
High frequency 2.5 GHz (base) Intel Xeon P-8175M processors for p3dn.24xlarge.
Supports NVLink for peer-to-peer GPU communication
Provides up to 100 Gbps of aggregate network bandwidth within a Placement Group.
|Model|GPUs|vCPU|Mem (GiB)|GPU Mem (GiB)|GPU P2P|Storage (GB)|Dedicated EBS Bandwidth|Networking Performance|
|p3.2xlarge|1|8|61|16|-|EBS-Only|1.5 Gbps|Up to 10 Gigabit|
|p3.8xlarge|4|32|244|64|NVLink|EBS-Only|7 Gbps|10 Gigabit|
|p3.16xlarge|8|64|488|128|NVLink|EBS-Only|14 Gbps|25 Gigabit|
|p3dn.24xlarge|8|96|768|256|NVLink|2 x 900 NVMe SSD|14 Gbps|100 Gigabit|
All instances have the following specs:

Intel AVX, Intel AVX2, Intel Turbo
EBS Optimized
Enhanced Networking†
 
p3.2xlarge, p3.8xlarge, and p3.16xlarge have 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 processors.  
 
p3dn.24xlarge has 2.5 GHz (base) and 3.1 GHz (sustained all-core turbo) Intel Xeon P-8175M processors and supports Intel AVX-512.
Use Cases

Machine/Deep learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery.
You would like to share some documents with public users accessing an S3 bucket over the Internet. What are two valid methods of granting public read permissions so you can share the documents? (choose 2)   

Options are:

# Grant public read access to the objects when uploading (Correct)
# Share the documents using CloudFront and a static website
# Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users (Correct)
# Grant public read on all objects using the S3 bucket ACL
# Share the documents using a bastion host in a public subnet
Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High frequency z1d instances deliver a sustained all core frequency of up to 4.0 GHz, the fastest of any cloud instance.

Features:

A custom Intel® Xeon® Scalable processor with a sustained all core frequency of up to 4.0 GHz
Up to 1.8TB of instance storage
High memory with up to 384 GiB of RAM
Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
With z1d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the z1d instance
|Model|vCPU|Mem (GiB)|Networking Performance|SSD Storage (GB)|
|z1d.large|2|16|Up to 10 Gigabit|1 x 75 NVMe SSD|
|z1d.xlarge|4|32|Up to 10 Gigabit|1 x 150 NVMe SSD|
|z1d.2xlarge|8|64|Up to 10 Gigabit|1 x 300 NVMe SSD|
|z1d.3xlarge|12|96|Up to 10 Gigabit|1 x 450 NVMe SSD|
|z1d.6xlarge|24|192|10 Gigabit|1 x 900 NVMe SSD|
|z1d.12xlarge|48|384|25 Gigabit|2 x 900 NVMe SSD|
|z1d.metal|48*|384|25 Gigabit|2 x 900 NVMe SSD|
*z1d.metal provides 48 logical processors on 24 physical cores

All instances have the following specs:

Up to 4.0 GHz Intel® Xeon® Scalable Processors
Intel AVX, Intel AVX2, Intel Turbo
EBS Optimized
Enhanced Networking†
Use Cases

Ideal for electronic design automation (EDA) and certain relational database workloads with high per-core licensing costs.