Cost optimization for serverless projects:
https://blog.ecrazytechnologies.com/optimize-with-aws-cost-explorer
#serverless #cost_optimization
eCrazy Technologies
Optimize with AWS Cost Explorer
AWS provides us a lavish free tier and generous developer credits. That made me kind of lax with any kind of budgeting. Whatever I do, I knew it would be covered. So I seldom cared for the costs.
My application is 100% serverless, and I was always wi...
Cleaning up incomplete S3 multipart uploads:
https://aws.amazon.com/blogs/aws-cost-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
#S3 #cost_optimization
If the complete multipart upload request isn’t sent successfully, Amazon S3 will not assemble the parts and will not create any object. The parts remain in your Amazon S3 account until the multipart upload completes or is aborted, and you pay for the parts that are stored in Amazon S3.
#S3 #cost_optimization
Amazon
Discovering and Deleting Incomplete Multipart Uploads to Lower Amazon S3 Costs | Amazon Web Services
This blog post is contributed by Steven Dolan, Senior Enterprise Support TAM Amazon S3’s multipart upload feature allows you to upload a single object to an S3 bucket as a set of parts, providing benefits such as improved throughput and quick recovery from…
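The fix the post recommends can be sketched as an S3 Lifecycle rule that automatically aborts multipart uploads left incomplete for some number of days (the rule ID and the 7-day window here are illustrative choices, not values from the post):

```json
{
  "Rules": [
    {
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
```

Applied with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://lifecycle.json`, after which S3 deletes the orphaned parts on its own and nothing keeps accruing storage charges.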
If you set up S3 Lifecycle configuration rules to transition objects to Glacier and Glacier Deep Archive, factor in the extra overhead costs:
▪️ For each object archived to S3 Glacier or S3 Glacier Deep Archive, Amazon S3 uses 8 KB of storage for the name of the object and other metadata. Amazon S3 stores this metadata so that you can get a real-time list of your archived objects by using the Amazon S3 API. For more information, see Get Bucket (List Objects). You are charged Amazon S3 Standard rates for this additional storage.
▪️ For each object that is archived to S3 Glacier or S3 Glacier Deep Archive, Amazon S3 adds 32 KB of storage for index and related metadata. This extra data is necessary to identify and restore your object. You are charged S3 Glacier or S3 Glacier Deep Archive rates for this additional storage.
This means that for small files (roughly under 16 KB for Glacier and under 10 KB for Glacier Deep Archive) keeping the file in "plain" S3 Standard actually works out cheaper (and that's before counting the charges for Lifecycle Transition requests).
This is also part of why S3 Intelligent-Tiering doesn't work with objects smaller than 128 KB:
The S3 Intelligent-Tiering storage class is suitable for objects larger than 128 KB that you plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for auto-tiering.
#S3 #cost_optimization
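The two overhead charges above turn into a quick back-of-the-envelope calculation. The per-GB prices below are assumed us-east-1 rates at the time of writing, not figures from the quoted docs:

```python
# Assumed us-east-1 storage prices, $/GB-month (illustrative, check current pricing)
S3_STANDARD = 0.023
GLACIER = 0.004          # S3 Glacier Flexible Retrieval
DEEP_ARCHIVE = 0.00099   # S3 Glacier Deep Archive

KB_PER_GB = 1024 * 1024

def standard_cost(size_kb: float) -> float:
    """Monthly cost, in dollars, of one object kept in S3 Standard."""
    return size_kb / KB_PER_GB * S3_STANDARD

def archive_cost(size_kb: float, archive_price: float) -> float:
    """Monthly cost of one archived object: 8 KB of metadata billed at
    Standard rates, plus the object and its 32 KB index billed at the
    archive-tier rate."""
    return (8 / KB_PER_GB * S3_STANDARD
            + (size_kb + 32) / KB_PER_GB * archive_price)

# A 4 KB object is cheaper left in Standard:
print(standard_cost(4) < archive_cost(4, GLACIER))        # True
# A 1 MB object is cheaper in Glacier despite the overhead:
print(standard_cost(1024) > archive_cost(1024, GLACIER))  # True
```

Sweeping the object size puts the break-even near 16 KB for Glacier and near 10 KB for Deep Archive, which is where the thresholds above come from.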