S3 - Simple Storage Service
S3 Consistency Model
· Read-after-write consistency for PUTs of new objects
o PUT 200, GET 200
o GET 404, PUT 200, GET 404 (if you GET a key before the object exists, a later GET may still return 404 for a short time)
· Eventual consistency for overwrite PUTs and DELETEs
Versioning
o if versioning is enabled and the bucket already contains files, then these existing file versions are "null" (see the sketch below)
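A minimal boto3 sketch of this behaviour (bucket and key names are made up): objects uploaded before versioning is turned on keep the version ID "null", while objects uploaded afterwards get real version IDs.
```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-notes-bucket"  # hypothetical bucket name

# Uploaded BEFORE versioning is enabled -> this object's version ID stays "null"
s3.put_object(Bucket=BUCKET, Key="existing.txt", Body=b"uploaded before versioning")

# Enable versioning on the bucket
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Uploaded AFTER enabling -> gets a real version ID
s3.put_object(Bucket=BUCKET, Key="new.txt", Body=b"uploaded after versioning")

# Listing versions shows VersionId == "null" for the pre-existing object
for v in s3.list_object_versions(Bucket=BUCKET).get("Versions", []):
    print(v["Key"], v["VersionId"])
```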
S3 -> at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix; S3 scales automatically
S3 > encryption: either SSE-S3 (AES-256) or SSE-KMS | the old way is to set it per request via headers | S3 access logs
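A hedged boto3 sketch of the two server-side encryption options above (bucket and key names are assumptions); both end up as the x-amz-server-side-encryption request header on the PUT.
```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-notes-bucket"  # hypothetical

# SSE-S3: AES-256, keys managed by S3
s3.put_object(Bucket=BUCKET, Key="sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: keys managed by AWS KMS (omit SSEKMSKeyId to use the default aws/s3 key)
s3.put_object(Bucket=BUCKET, Key="sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms")
```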
S3 pre-signed URLs > generate via SDK or CLI | default expiration 1 hr (3600 s) | inherits the permissions of the identity that generated it | works for GET and PUT (sketch below)
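A minimal sketch of generating pre-signed URLs with boto3 (bucket and keys are made up); the CLI equivalent is `aws s3 presign s3://<bucket>/<key> --expires-in 3600`.
```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-notes-bucket"  # hypothetical

# Pre-signed GET: anyone holding the URL can download the object until it expires
get_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "report.csv"},
    ExpiresIn=3600,  # seconds; 1 hour, matching the default noted above
)

# Pre-signed PUT: the holder can upload to exactly this key
put_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": BUCKET, "Key": "upload.txt"},
    ExpiresIn=600,
)
print(get_url, put_url, sep="\n")
```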
S3 static website >
<bucket-name>.s3-website-<AWS-region>.amazonaws.com
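A sketch of enabling static website hosting via boto3 (the bucket name and index/error documents are assumptions):
```python
import boto3

s3 = boto3.client("s3")

# Turn on static website hosting for the bucket
s3.put_bucket_website(
    Bucket="my-notes-bucket",  # hypothetical
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# The site is then served at
#   http://my-notes-bucket.s3-website-<AWS-region>.amazonaws.com
```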
S3 Select > you can use simple structured query
language (SQL) statements to filter the contents of Amazon S3 objects and
retrieve just the subset of data that you need.
S3 Select works on objects stored in CSV, JSON, or
Apache Parquet format. It also works with objects that are compressed with GZIP
or BZIP2 (for CSV and JSON objects only), and server-side encrypted objects.
You can specify the format of the results as either CSV or JSON, and you can
determine how the records in the result are delimited.
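A hedged boto3 sketch of S3 Select against an uncompressed CSV object with a header row (bucket, key, and column names are assumptions):
```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-notes-bucket",   # hypothetical
    Key="sales.csv",            # hypothetical CSV with a header row
    ExpressionType="SQL",
    Expression="SELECT s.product, s.amount FROM S3Object s WHERE CAST(s.amount AS FLOAT) > 100",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"},
                        "CompressionType": "NONE"},  # GZIP/BZIP2 also allowed for CSV/JSON
    OutputSerialization={"JSON": {}},
)

# The response payload is an event stream; Records events carry the filtered rows
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```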
Request Rate and Performance Guidelines
Amazon S3
automatically scales to high request rates. For example, your application can
achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per
prefix in a bucket. There are no limits to the number of prefixes in a bucket.
You can increase your read or write performance by parallelizing requests across prefixes. For
example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads,
you could scale your read performance to 55,000 read requests per second (10 × 5,500).
A random hex hash prefix at the start of the key name would distribute the load across multiple index partitions.
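A small sketch of that key-naming idea (the helper and prefix length are illustrative, not an AWS API): prepend a short hex hash so keys fan out across many prefixes.
```python
import hashlib

def hashed_key(original_key: str, prefix_len: int = 2) -> str:
    """Prepend a short hex hash so keys spread across multiple prefixes,
    e.g. "logs/2024-01-01.gz" -> "<2-hex-chars>/logs/2024-01-01.gz"."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

print(hashed_key("logs/2024-01-01.gz"))
```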
S3 Intelligent-Tiering stores objects in two access tiers:
one tier that is optimized for frequent access and another lower-cost tier that
is optimized for infrequent access
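A minimal boto3 sketch of writing an object straight into Intelligent-Tiering (bucket and key names are made up):
```python
import boto3

s3 = boto3.client("s3")

# S3 then moves the object between the frequent- and infrequent-access tiers
# automatically, based on how often it is read
s3.put_object(
    Bucket="my-notes-bucket",    # hypothetical
    Key="reports/monthly.pdf",   # hypothetical
    Body=b"report bytes",
    StorageClass="INTELLIGENT_TIERING",
)
```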