MCP сервер для Cost Explorer:
https://github.com/aarora79/aws-cost-explorer-mcp-server
Gives Claude access to Cost Explorer, so you can query and chart all your spend with plain-text prompts. Optionally, Amazon Bedrock spend can be included as well.
By default it runs locally, but you can also run it on a VM in AWS and use an IAM role for access.
#MCP #cost_optimization
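For a sense of what such a server does under the hood, here is a sketch of the kind of Cost Explorer query it might issue. This assumes the GetCostAndUsage API (the usual entry point for programmatic cost data); only the request parameters are built here, so the example stays runnable without AWS credentials — with boto3 you would pass them as `boto3.client("ce").get_cost_and_usage(**params)`.

```python
# Hypothetical sketch: build GetCostAndUsage parameters for a daily
# unblended-cost report, optionally grouped by service. Building the dict
# separately keeps this credential-free and testable.

def build_cost_query(start: str, end: str, group_by_service: bool = True) -> dict:
    """Parameters for a daily unblended-cost report (ISO dates, end exclusive)."""
    params = {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
    }
    if group_by_service:
        # One result group per AWS service, e.g. "Amazon Bedrock"
        params["GroupBy"] = [{"Type": "DIMENSION", "Key": "SERVICE"}]
    return params

params = build_cost_query("2025-03-01", "2025-03-31")
print(params["Granularity"])  # DAILY
```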
MCP server for Terraform:
https://github.com/nwiizo/tfmcp
◽️ Reading Terraform configuration files
◽️ Analyzing Terraform plan outputs
◽️ Applying Terraform configurations
◽️ Managing Terraform state
◽️ Creating and modifying Terraform configurations
#MCP #Terraform
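"Analyzing Terraform plan outputs" usually means reading the JSON that `terraform show -json tfplan` emits. A minimal sketch of that analysis, counting planned actions from `resource_changes` the way the plan footer does — the sample document below is a trimmed, hypothetical plan, not real tfmcp output:

```python
import json

# Hypothetical, trimmed plan JSON (real plans carry far more fields).
SAMPLE_PLAN = json.dumps({
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
        {"address": "aws_instance.web", "change": {"actions": ["delete", "create"]}},
        {"address": "aws_iam_role.app", "change": {"actions": ["no-op"]}},
    ]
})

def summarize_plan(plan_json: str) -> dict:
    """Count resources to add, change, destroy, like the plan footer line."""
    counts = {"add": 0, "change": 0, "destroy": 0}
    for rc in json.loads(plan_json).get("resource_changes", []):
        actions = rc["change"]["actions"]
        if actions == ["delete", "create"]:  # replacement counts both ways
            counts["add"] += 1
            counts["destroy"] += 1
        elif "create" in actions:
            counts["add"] += 1
        elif "delete" in actions:
            counts["destroy"] += 1
        elif "update" in actions:
            counts["change"] += 1
    return counts

print(summarize_plan(SAMPLE_PLAN))  # {'add': 2, 'change': 0, 'destroy': 1}
```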
🚀 Join our AWS Workshop: Decoupled Microservices on March 21! 🚀
☁️ Free of charge
☁️ Online
☁️ English
AWS Step Functions is a serverless workflow service that simplifies managing long-running processes and coordinating distributed applications. This workshop explores how Step Functions can implement the Saga design pattern to maintain data consistency across microservices without using Distributed Transaction Coordinators (DTC) or two-phase commits. You'll gain hands-on experience in orchestrating local transactions within a cloud-based architecture.
We will provide a dedicated training account for each participant who registers via the Google Form [https://forms.gle/YnaQS8wyjpkMaWby8]. However, you can also use your personal AWS account.
⚠️ IMPORTANT: Please complete the registration form [https://forms.gle/YnaQS8wyjpkMaWby8] by 21:00 UTC+1 on March 20.
Find out more details and register via the link below:
https://wearecommunity.io/events/workshop-decoupled-microservices
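The Saga idea the workshop covers can be shown in a few lines: each local transaction has a compensating action, and on failure the completed steps are undone in reverse order. Step Functions expresses this with Catch/fallback states; the sketch below is plain Python just to show the control flow, with made-up step names:

```python
def run_saga(steps):
    """steps: list of (name, action, compensation) tuples of callables."""
    done, log = [], []
    for name, action, compensate in steps:
        try:
            action()
            log.append(f"{name}: ok")
            done.append((name, compensate))
        except Exception:
            log.append(f"{name}: failed")
            # Roll back completed local transactions, newest first
            for prev_name, prev_comp in reversed(done):
                prev_comp()
                log.append(f"{prev_name}: compensated")
            return False, log
    return True, log

def fail():
    raise RuntimeError("payment declined")

ok, log = run_saga([
    ("reserve-seat", lambda: None, lambda: None),
    ("charge-card", fail, lambda: None),
])
print(ok, log)
# False ['reserve-seat: ok', 'charge-card: failed', 'reserve-seat: compensated']
```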
Lambda best practices:
https://aws.amazon.com/blogs/compute/handling-billions-of-invocations-best-practices-from-aws-lambda/
🔸 Stateless functions: Ensure functions do not maintain state between invocations.
🔹 Service over custom code: Utilize AWS services instead of writing custom solutions.
🔸 Decouple components: Minimize dependencies between services to enhance scalability.
🔹 Idempotent operations: Design functions to handle repeated events safely.
🔸 On-demand processing: Process events as they occur, avoiding batch processing.
🔹 Use Step Functions: Consider Step Functions for complex workflows.
🔸 Multiple AWS accounts: Manage quotas and isolation by using separate accounts.
#Lambda
MCP-server for Kubernetes:
https://github.com/Flux159/mcp-server-kubernetes
▫️ Connect to a Kubernetes cluster
▫️ List all pods
▫️ List all services
▫️ List all deployments
▫️ List all nodes
▫️ Create a pod
▫️ Delete a pod
▫️ Describe a pod
▫️ List all namespaces
▫️ Get logs from a pod for debugging (supports pods, deployments, jobs, and label selectors)
▫️ Support Helm v3 for installing charts
▪️ Install charts with custom values
▪️ Uninstall releases
▪️ Upgrade existing releases
▪️ Support for namespaces
▪️ Support for version specification
▪️ Support for custom repositories
#MCP #Kubernetes
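Structurally, an MCP server like this is a dispatch table from tool names to handlers. The sketch below uses illustrative tool names and a fake in-memory "cluster" rather than the project's actual API; a real server would call kubectl or a Kubernetes client inside each handler.

```python
# Fake cluster state so the example runs offline (illustrative only).
CLUSTER = {"pods": ["web-1", "web-2"], "namespaces": ["default", "kube-system"]}

# Hypothetical tool registry: name -> handler(args)
TOOLS = {
    "list_pods": lambda args: list(CLUSTER["pods"]),
    "list_namespaces": lambda args: list(CLUSTER["namespaces"]),
    "create_pod": lambda args: (CLUSTER["pods"].append(args["name"])
                                or f"created {args['name']}"),
}

def call_tool(name: str, args: dict):
    """Route an MCP tool call to its handler."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](args)

print(call_tool("create_pod", {"name": "web-3"}))  # created web-3
print(call_tool("list_pods", {}))                  # ['web-1', 'web-2', 'web-3']
```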
AWS Notes
Security leader Wiz turned down Google's buyout offer and is heading for an IPO. https://www.theverge.com/2024/7/23/24204198/google-wiz-acquisition-called-off-23-billion-cloud-cybersecurity Very good: for such a sensitive niche, giants like these…
Google + Wiz
What wasn't bought a year ago for $23 billion has now been bought for $32 billion.
https://cloud.google.com/blog/products/identity-security/google-announces-agreement-acquire-wiz
Oh well. C'est la security.
#security
Kafka 4.0:
➖ ZooKeeper 🪦
➕ Queues 🔥
https://kafka.apache.org/blog#apache_kafka_400_release_announcement
#Kafka
Forwarded from Make. Build. Break. Reflect.
#aws #aurora #terraform #finops
Senior-level material.
We know how to pick an instance type based on CPU/memory load and other factors.
But how well did you choose the cluster storage configuration for your project's Aurora? Is it still efficient a year or two later?
Nobody knows, so let's figure it out.
- First, build a dashboard for your existing cluster:
https://gist.github.com/kruchkov-alexandr/d9335d7927e58d06557b994dc9f194de
- Apply it and see the panel in CloudWatch.
The top panel shows storage reads/writes, the bottom one the database size (plus snapshots?).
The split is needed because of the difference in scale and for easier export.
- Select a three-month period, click the three dots on both panels, choose Download as .csv, and download both files.
- Go to Cost Explorer and export three months of data for the cluster.
- Open your favorite AI (I asked Claude, Perplexity, and Grok 3, all paid) and write a prompt along these lines (write your own if mine seems dumb to you):
"Help me decide if we should switch to Amazon Aurora I/O-Optimized. Use the attached billing screenshot/csv, three-month IOPS data from the CSV, and the IOPS/storage graphs to analyze our costs. Calculate our current I/O expenses, compare them to I/O-Optimized costs and check if our I/O costs exceed AWS's 25% threshold for switching. Look at IOPS and storage trends, then recommend whether to switch, including specific cost figures. I've attached all files (billing, CSV, graphs).
based on this article
https://aws.amazon.com/blogs/database/estimate-cost-savings-for-the-amazon-aurora-i-o-optimized-feature-using-amazon-cloudwatch/"
- Wait for the answers. All four models gave me a detailed calculation that was 95% identical. In short: "Too early to switch."
- Write to your manager/boss:
I've analyzed our infrastructure costs over the last three months billing and IOPS data, to see if switching to Amazon Aurora I/O-Optimized makes sense. Right now, it's not cost-effective. Our I/O costs an average of $******* monthly (************ I/Os at $**** per million). Moving to I/O-Optimized would increase instance costs by ***%, from $******* to $******* - a $******* jump, which is $415.21 more than our current I/O expenses.
Our IOPS trends show peaks up to *** but no major growth, averaging ~** Write and ~**** Read IOPS hourly in February. Storage usage is growing at *** GB/month, but that doesn't impact the I/O-Optimized cost comparison. AWS suggests I/O-Optimized when I/O costs exceed **% of total Aurora spend, but ours ($******) are only **% of the $******* total, so we're below that threshold.
I recommend sticking with our standard configuration for now. We should keep monitoring I/O activity - if it exceeds **** I/Os monthly or I/O costs reach **% of our Aurora spend, we can revisit I/O-Optimized.
Attach all the files, screenshots, and calculations.
- Close the ticket and log the time.
All of the above took me about 15 minutes, while the prep work (reading about the feature, its quirks and limits, how to do the math, building the dashboard, billing details, etc.) took almost half a day.
* If you don't trust the AI, you can redo the math by hand 🐒
More useful links (in case you don't believe me):
- feature announcement
https://aws.amazon.com/about-aws/whats-new/2023/05/amazon-aurora-i-o-optimized/
- manager-level overview
https://aws.amazon.com/awstv/watch/b9bfc040ac5/
- example calculations (done by hand, no AI)
https://aws.amazon.com/blogs/database/estimate-cost-savings-for-the-amazon-aurora-i-o-optimized-feature-using-amazon-cloudwatch/
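The 25% rule of thumb above is simple arithmetic: I/O-Optimized is worth considering once per-I/O charges exceed roughly a quarter of your total Aurora spend (it removes I/O charges but raises instance and storage prices). A sketch of the check — the $0.20-per-million-I/Os figure is the us-east-1 list price for Aurora Standard, the input numbers are made up, and the linked AWS blog post has the authoritative formulas:

```python
# Aurora Standard I/O list price in us-east-1 (USD per million requests);
# verify against current pricing before relying on this.
STANDARD_IO_PRICE = 0.20

def io_share(instance_cost: float, storage_cost: float,
             millions_of_ios: float) -> float:
    """I/O charges as a fraction of total Aurora Standard monthly spend."""
    io_cost = millions_of_ios * STANDARD_IO_PRICE
    total = instance_cost + storage_cost + io_cost
    return io_cost / total

def should_consider_io_optimized(instance_cost, storage_cost,
                                 millions_of_ios, threshold=0.25) -> bool:
    return io_share(instance_cost, storage_cost, millions_of_ios) >= threshold

# e.g. $500 instances + $100 storage + 1000M I/Os -> $200 I/O = 25% of $800
print(should_consider_io_optimized(500, 100, 1000))  # True
print(should_consider_io_optimized(500, 100, 100))   # 20/620 ~ 3.2% -> False
```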
MCP with Bedrock:
- Allows AI models to access information beyond their built-in knowledge.
- Helps build tools that let AI models perform actions (such as visiting websites or checking the weather).
- Establishes communication between the user, the AI model, and external tools through a standardized protocol.
https://community.aws/content/2uFvyCPQt7KcMxD9ldsJyjZM1Wp/model-context-protocol-mcp-and-amazon-bedrock
For example, if you ask "get me a summary of the blog post at this URL", the system will:
- Process your request
- Recognize it needs to use a tool to visit the webpage
- Fetch the content from the URL
- Return the information to the model
- Generate a summary based on the fetched content
#MCP #Bedrock
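The five steps above form a loop: the model either answers or asks for a tool, and tool results are fed back in. A sketch of that loop with a stubbed model and fetcher so it runs offline — with Bedrock, the "recognize it needs a tool" step is the model returning a tool-use block via the Converse API; here both sides are fakes.

```python
def fake_fetch(url: str) -> str:
    # Stand-in for an MCP tool that visits a webpage
    return f"<contents of {url}>"

def fake_model(prompt: str, tool_result=None) -> dict:
    # Stand-in for the model: first pass requests the tool,
    # second pass produces the final answer from the tool output.
    if tool_result is None:
        url = prompt.split()[-1]
        return {"tool_call": {"name": "fetch", "url": url}}
    return {"answer": f"summary of {tool_result}"}

def run(prompt: str) -> str:
    reply = fake_model(prompt)
    while "tool_call" in reply:  # model wants external data
        tool_out = fake_fetch(reply["tool_call"]["url"])
        reply = fake_model(prompt, tool_result=tool_out)
    return reply["answer"]

print(run("summarize the blog post at https://example.com/post"))
# summary of <contents of https://example.com/post>
</n```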
Happy Friday, everyone.
🔥 Version 0.19.1 of the SRE learning platform is out! 🔥
GitHub
📌 What's new:
- updated the runner docker image (viktoruj/runner)
- added an API to the ping-pong server
- added program-crash emulation to ping-pong
- added slow-response emulation to ping-pong
- added viewing current variable values in ping-pong
- added changing variables in a running ping-pong via the API
- added getting current system parameters in ping-pong via the API
- added compiled ping-pong binaries for various operating systems
ping-pong server documentation
🧪 Available practice exams:
CKA
CKAD
CKS
KCNA
KCSA
LFCS
Scripts and videos with exam solutions:
May the force be with you!
Forwarded from valmont2k
Here's a life hack for the pros: every meeting is a chance to practice ten-finger touch typing, which is arguably the more useful activity anyway.
Just a few meetings and you're a master.
IngressNightmare: several vulnerabilities at once in the NGINX Ingress Controller for Kubernetes, giving unauthenticated access to secrets across everything, everywhere:
https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities
◽️ Who is affected: anyone running ingress-nginx versions before 1.12.1/1.11.5. To fix it, upgrade to the latest version immediately.
◽️ Who is not affected: EKS users:
EKS does not provide or install the ingress-nginx controller and is not affected by these issues.
The official Kubernetes vulnerability report:
https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/
#Kubernetes #security
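A quick way to triage your clusters is to compare controller versions against the patched releases (1.12.1 for the 1.12 line, 1.11.5 for 1.11, everything older unpatched). A sketch with naive X.Y.Z parsing, not a real scanner:

```python
# Patched releases per advisory: line -> first fixed version.
PATCHED = {(1, 12): (1, 12, 1), (1, 11): (1, 11, 5)}

def is_vulnerable(version: str) -> bool:
    """True if an ingress-nginx X.Y.Z version predates the fixed releases."""
    v = tuple(int(x) for x in version.split("."))
    line = v[:2]
    if line in PATCHED:
        return v < PATCHED[line]
    # Anything older than the 1.11 line never received the fix
    return v < (1, 11)

print(is_vulnerable("1.12.0"))  # True
print(is_vulnerable("1.12.1"))  # False
print(is_vulnerable("1.10.3"))  # True
```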
We are excited to announce that on Friday, March 28, we are hosting an AWS Workshop: Amazon RDS for PostgreSQL, open to everyone!
With Amazon RDS, you can deploy scalable PostgreSQL instances in minutes with cost-efficient and resizable hardware capacity. Amazon RDS manages complex administrative tasks such as PostgreSQL software installation and upgrades, storage management, replication for high availability and read throughput, and backups for disaster recovery.
The workshop will be led by Sergey Pagin, an experienced AWS specialist and Lead Systems Engineer at EPAM Systems.
We will provide a dedicated training account for each participant who registers via the Google Form [https://forms.gle/L6Fp8md2XGwsK2DG8]. However, you can also use your personal AWS account.
⚠️ IMPORTANT: Please complete the registration form [https://forms.gle/L6Fp8md2XGwsK2DG8] by 21:00 UTC+1 on March 27.
🔗 Find out more details and register here:
https://wearecommunity.io/events/workshop-amazon-rds-for-postgresql
I'm writing about AI over here. First and foremost for my own people, and secondly for myself, to keep materials on the topic in one place.
AIзбука
A literacy primer on AI (Artificial Intelligence).
Intended for everyone, but especially for engineers, because they are a particularly tough case.
Channel link: https://xn--r1a.website/AIzbuka
Contact: @apple_rom
EKS Terraform demo
https://github.com/setheliot/eks_demo/
Deploys:
▪️ EKS cluster using EC2 nodes
▪️ DynamoDB table
▪️ EBS volume used as attached storage for the Kubernetes cluster (a PersistentVolume)
▪️ Demo "guestbook" application, deployed via containers
▪️ ALB to access the app
#EKS #Terraform
Forwarded from AWS Weekly (Max Skutin)
▪️ Amplify Hosting WAF Protection | GA
▪️ Amplify
▫️ samples to Deploy Storage Browser for S3
▫️ Shared Keychain support for Swift
▪️ Application Recovery Controller FIS recovery action for zonal autoshift
▪️ Bedrock Custom Model Import introduces real-time cost transparency
▪️ Bedrock Guardrails industry-leading image content filters | GA
▪️ Bedrock Knowledge Bases OpenSearch Managed Cluster for vector storage
▪️ CloudFormation targeted resource scans in the IaC generator
▪️ CodeBuild custom cache keys for S3 caching
▪️ Connected Mobility Solution new features
▪️ Database Insights customization of its metrics dashboard
▪️ DataZone metadata rules for publishing
▪️ Dedicated Local Zones gp3 and io1 ebs volumes
▪️ DMS Schema Conversion IBM Db2 for z/OS to RDS for Db2 conversion
▪️ DynamoDB percentile statistics for request latency
▪️ DynamoDB Streams PrivateLink support
▪️ EC2 more bandwidth and jumbo frames to select destinations
▪️ EKS enforces upgrade insights checks as part of cluster upgrades
▪️ Elemental MediaConnect NDI® outputs
▪️ EventBridge Scheduler PrivateLink support
▪️ GameLift Servers next-gen EC2 instances
▪️ IAM dual-stack (IPv4 and IPv6) environments
▪️ Keyspaces Multi-Region support for User Defined Types (UDTs)
▪️ Lambda Ruby 3.4
▪️ Marketplace new seller experiences for ML products
▪️ Network Firewall pass action rule alerts and JA4 filtering
▪️ Network Manager support PrivateLink and IPv6
▪️ Open Source Corretto 24 | GA
▪️ Parallel Computing Service Terraform support
▪️ Polly New Korean voice
▪️ Q Business upgrades for Slack and Teams Integrations
▪️ Q in QuickSight Scenarios capability | GA
▪️ RDS for MySQL Innovation Release 9.2 in Preview Environment
▪️ RDS for SQL Server linked servers to Teradata databases
▪️ Route 53 Profiles IPv6 Service Endpoints
▪️ SageMaker HyperPod multi-head node support in Slurm
▪️ SageMaker metadata rules to enforce standards and improve data governance