
AWS re:Invent 2021: Keynotes

02 December 2021, Thursday By Priyanka Boruah

2021 is an anniversary year for Amazon Web Services for more than one reason: it has been 15 years since the launch of the first services, and re:Invent itself was held for the 10th time! The second day of the learning conference has come to an end, and in this article we have collected the most important announcements from the past days.

Traditionally, the best Amazon Web Services solution architects will discuss all the significant announcements of re:Invent 2021 on their YouTube channel. Register, connect, and ask your questions!

AWS re:Invent 2021

Security always comes first

Security is one of the top priorities for AWS, so we want to start with this topic.

Amazon CodeGuru is expanding its functionality: it now helps not only to automate code reviews and identify potential bugs, but also to find "secrets" stored in code. Amazon CodeGuru Reviewer Secrets Detector is a tool for the automatic detection of passwords, API keys, SSH keys, and tokens. The new detector uses machine learning to identify secrets and runs as part of the code review process.

Amazon CodeGuru

The new functionality is part of CodeGuru Reviewer (so there is no additional cost) and covers secrets from most popular API providers: AWS, Atlassian, Datadog, Databricks, GitHub, Hubspot, Mailchimp, Salesforce, SendGrid, Shopify, Slack, Stripe, Tableau, Telegram.
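To illustrate the kind of problem Secrets Detector automates, here is a minimal regex-based sketch of hardcoded-secret scanning. The patterns below are simplified illustrations only; CodeGuru's ML-based detector covers far more providers and formats than any handful of regexes.

```python
import re

# Illustrative patterns only - not the detector's actual rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[A-Za-z0-9-]{10,}\b"),
}

def find_secrets(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for likely hardcoded secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(source):
            hits.append((name, match))
    return hits

# AWS's well-known documentation example key triggers the first pattern:
code = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops, hardcoded'
print(find_secrets(code))
```

Even this toy version shows why an automated detector in the review pipeline is valuable: secrets are easy to commit and hard to spot by eye.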

Another vulnerability detection tool with expanded functionality is Amazon Inspector. Assessments can now run continuously, and the inspector agent functionality has been moved into the AWS Systems Manager agent. New resources - Amazon EC2 instances and Amazon Elastic Container Registry repositories - are added to Inspector automatically. Risk and vulnerability prediction becomes more accurate by combining metadata from CVE (Common Vulnerabilities and Exposures) entries with the specifics of your environment. Support for integrations with Amazon EventBridge and AWS Security Hub has also been added.

AWS Security Hub

Compute Resources - New Instance Types and EBS

Amazon New Instance Types and EBS

New Amazon EC2 Im4gn and Is4gen Instance Types Powered by AWS Graviton2 Processors

Im4gn and Is4gen are new instance types optimized for storage-focused workloads, with up to 30 TB of local next-generation NVMe storage powered by AWS Nitro SSDs, which AWS designed specifically to increase performance in data warehousing and I/O-intensive applications such as SQL/NoSQL databases, search engines, distributed file systems, and data analytics. These instances maximize the number of transactions processed per second (TPS) for I/O-intensive workloads such as relational databases (MySQL, MariaDB, PostgreSQL) and NoSQL databases (KeyDB, ScyllaDB, Cassandra).

Amazon EC2 I4i

Amazon EC2 I4i instances are powered by 3rd generation Intel Xeon Scalable processors and provide up to 30 TB of local NVMe storage powered by the next generation AWS Nitro SSD. In addition to high performance and low data access latency, Nitro SSDs also feature always-on disk encryption. 

I4i instances provide up to 30% better price-performance, 60% lower disk I/O latency, and 75% less variability in storage access latency compared to I3 instances.

These instances are a good fit for databases such as MySQL, Oracle DB, and Microsoft SQL Server, as well as NoSQL databases such as MongoDB, Couchbase, Aerospike, and Redis, where low-latency access to local NVMe storage is required to meet application SLAs.

G5g instances on Graviton2 and NVIDIA T4G Tensor Core GPUs

In addition to Graviton2 processors, G5g instances host NVIDIA T4G Tensor Core GPUs to deliver the best price-performance for streaming Android games, with up to 25 Gbps of network bandwidth and up to 19 Gbps of bandwidth to EBS.

These instances offer up to 30% lower cost per hour of streaming Android games than x86-based GPU instances. G5g instances are also a good fit for machine learning developers looking for cost-effective inference with ML models that rely on NVIDIA AI libraries.

Another use case is graphics rendering using NVIDIA libraries, or application rendering based on industry standard APIs such as OpenGL or Vulkan. 

At the same time, if you do not need support for NVIDIA libraries, you can use Inf1 instances, which provide up to 70% lower cost per inference compared to G4dn instances.

Amazon EC2 M6a instances powered by 3rd Gen AMD EPYC processors

The processors in these instances run at frequencies up to 3.6 GHz and offer up to 35% better value for money than the previous generation of M5a instances.

Compared to M5a, the new M6a instance type has the following differences:

  • The maximum instance size has been increased to 48xlarge, with 192 vCPUs and 768 GiB of memory, allowing you to run larger workloads within a single instance. M6a also gains Elastic Fabric Adapter (EFA) support for workloads that need low network latency and a highly scalable inter-node communication channel, such as HPC or video processing.

  • Up to 35% better performance per vCPU than comparable M5a instances, up to 50 Gbps of network bandwidth, and up to 40 Gbps of bandwidth to Amazon EBS - nearly double that of M5a instances.

  • An always-on memory encryption engine and support for new AVX2 instructions that accelerate encryption and decryption algorithms.

Graviton3


AWS Graviton3 processors are the next generation of the AWS Graviton processor family. They deliver up to 25% better compute performance, up to 2x faster floating-point performance, and up to 2x faster cryptographic workloads compared to AWS Graviton2 processors. AWS Graviton3 processors are up to 3x faster on ML workloads and include bfloat16 support. They also support DDR5 memory, which provides 50% more memory bandwidth than DDR4.

Amazon EC2 C7g AWS Graviton3 Processor Instances

Amazon EC2 C7g instances, powered by the next generation of AWS Graviton3 processors, provide the best price-performance in Amazon EC2 for compute-intensive workloads. C7g instances are ideal for HPC workloads, batch processing, electronic design automation (EDA), gaming, video processing, scientific modeling, distributed analytics, CPU-based machine learning, and ad serving. They are up to 25% faster than sixth-generation C6g instances powered by Graviton2. C7g instances are the first in the cloud to support DDR5 memory, delivering 50% more memory bandwidth than DDR4 for faster memory access. C7g also supports the Elastic Fabric Adapter (EFA) for HPC applications that require high inter-node connectivity.

Amazon EC2 Trn1 AWS Trainium Instances (Preview)

Amazon EC2 Trn1 AWS Trainium Instances

Trn1 instances provide the best price-performance for training deep learning models in the cloud for use cases such as NLP, object detection, image recognition, recommendation algorithms, smart search, and more. They support up to 16 Trainium accelerators, up to 800 Gbps of EFA network bandwidth (double that of GPU-based instances), and ultra-fast intra-instance connectivity for the fastest ML training on Amazon EC2.

AWS Outposts Servers - New Form Factors

AWS Outposts brings AWS compute to your on-premises sites; it is monitored and managed by AWS and controlled through the AWS APIs you are already familiar with. You may have already heard of AWS Outposts in its full-size 42U rack form factor.

AWS is launching new AWS Outposts servers today, powered by the AWS Nitro System and with a choice of x86 or Arm/Graviton2 processors:

AWS Outposts servers

Within each of your Outposts servers, you can run any number of instances of different sizes, as long as together they stay within the available compute and storage resources. You can create Amazon Elastic Container Service (Amazon ECS) clusters, and support for Amazon EKS clusters is planned for the near future. The code you run on these on-premises resources can use all available AWS cloud services.

Each Outposts server connects to the cloud either over the public Internet or over private dedicated channels powered by AWS Direct Connect. Each Outposts server also supports a Local Network Interface (LNI), which provides a Layer 2 presence on your LAN for AWS service endpoints.


AWS Nitro SSD - High Performance Drives for I/O Intensive Applications

The first generation of these drives was used in io2 Block Express EBS volumes, giving you volumes with high IOPS, high throughput, and a maximum size of 64 TiB.

Second-generation AWS Nitro SSDs are engineered to avoid spikes in access latency and provide excellent I/O performance for real-world workloads. In tests, instances using the new AWS Nitro SSDs, such as Im4gn and Is4gen, showed 75% less variability in access latency than I3 instances, resulting in more consistent performance.

New Archive Storage Class for Amazon EBS Snapshots

With Amazon EBS Snapshots Archive, a new storage class for EBS snapshots, you can save up to 75% on storage costs for snapshots that you want to retain for more than 90 days and rarely need to access.

EBS Snapshots Archive stores full snapshots of your volume at a storage cost of $0.0125/GB-month, with a minimum storage period of 90 days. Restoring a snapshot from this storage class costs $0.03/GB of data retrieved (prices shown are for the us-east-1 region).
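A quick back-of-the-envelope comparison shows when the archive tier pays off. The archive storage price ($0.0125/GB-month) and retrieval price ($0.03/GB) come from the announcement above; the standard-tier price of $0.05/GB-month is an assumption for us-east-1, and the comparison ignores that standard snapshots are incremental while archived snapshots are full copies.

```python
# Assumed us-east-1 prices; only the archive figures come from the announcement.
STANDARD_PER_GB_MONTH = 0.05   # assumption for standard snapshot storage
ARCHIVE_PER_GB_MONTH = 0.0125  # from the announcement
RETRIEVAL_PER_GB = 0.03        # from the announcement

def standard_cost(size_gb: float, months: int) -> float:
    """Standard-tier storage cost over the retention period."""
    return size_gb * STANDARD_PER_GB_MONTH * months

def archive_cost(size_gb: float, months: int, restores: int = 0) -> float:
    """Archive-tier storage cost plus per-restore retrieval fees."""
    return size_gb * (ARCHIVE_PER_GB_MONTH * months + RETRIEVAL_PER_GB * restores)

# A 500 GB snapshot kept for 12 months with one restore:
print(f"standard: ${standard_cost(500, 12):.2f}")               # $300.00
print(f"archive:  ${archive_cost(500, 12, restores=1):.2f}")    # $90.00
```

Under these assumptions the archive tier wins comfortably for long-lived, rarely restored snapshots; frequent restores erode the savings at $0.03/GB each.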

Recover accidentally deleted EBS snapshots using the Recycle Bin

Previously, if you accidentally deleted an EBS snapshot, you had to fall back to an earlier snapshot, which increased the RPO of your workloads. With the new Recycle Bin functionality, you can set a retention period for deleted snapshots and restore them before that time expires.

Containers and Kubernetes


One of the interesting releases at re:Invent 2021 was Karpenter, an open-source tool for autoscaling Kubernetes clusters, built in response to feedback from AWS customers. Previously, EKS cluster autoscaling was achieved by combining Amazon EC2 Auto Scaling groups with the Kubernetes Cluster Autoscaler. Many customers complained that getting scaling right was not a trivial task and that the k8s Cluster Autoscaler lacked the capabilities they needed.

When Karpenter runs in your cluster, it estimates the total resource requests of pods that no longer fit on the current infrastructure and decides when to start new nodes and when to stop them, speeding up pod launches and reducing infrastructure costs. Karpenter itself selects the size and number of virtual machines needed to use compute resources most efficiently, interacting directly with the virtual machine service of a cloud provider such as Amazon EC2.

In AWS, Karpenter can manage all types of compute resources: virtual machines that you configured and added to the cluster yourself (self-managed node groups), virtual machines managed by AWS (managed node groups), and AWS Fargate. You can get started with Karpenter through its documentation and the demo video from AWS Container Day.

Caching Public Images in Amazon ECR

You can now connect your private ECR repositories to cache images from public repositories (no authorization required). You can set rules for each image separately, and Amazon ECR will automatically synchronize image versions with the source repository, updating its cache once a day. Read more in the documentation.


The new analytics announcements have something in common: four services can now be used in serverless mode.

Amazon Kinesis Data Streams On-Demand Mode

Amazon Kinesis Data Streams is a serverless service for ingesting and processing streaming data. The one difficult task when launching a data stream in Kinesis was predicting the volume of incoming data: when creating a Kinesis Data Stream, you had to specify the number of shards, that is, pre-set the maximum capacity and throughput of the stream. A new configuration mode has now been added - Kinesis Data Streams On-Demand. In this mode, the stream automatically scales its capacity to match the changing volume of data, and you pay for each gigabyte of data written, read, and stored in the stream per unit of time.

A Kinesis Data Stream in this mode can serve write and read throughput of several gigabytes per minute without capacity planning. You can also convert an existing data stream to on-demand mode with one click in the AWS Console. This mode provides the same high availability and reliability that Kinesis Data Streams already offers, and features like AWS PrivateLink, Amazon Virtual Private Cloud, Enhanced Fan-Out, and Extended Retention work unchanged. When you switch existing streams to on-demand mode, you can keep using existing applications to write and read data without any code changes or downtime, and all existing Kinesis Data Streams integrations with other AWS services continue to work.
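To see what capacity planning on-demand mode eliminates, here is the shard-sizing arithmetic you had to do in provisioned mode. The per-shard limits used below (1 MB/s or 1,000 records/s of writes, 2 MB/s of reads) are the well-known provisioned-mode quotas; treat the example workload numbers as assumptions.

```python
import math

# Provisioned-mode per-shard quotas for Kinesis Data Streams.
WRITE_MBPS_PER_SHARD = 1.0
WRITE_RECORDS_PER_SHARD = 1000
READ_MBPS_PER_SHARD = 2.0

def shards_needed(write_mbps: float, write_rps: float, read_mbps: float) -> int:
    """Minimum shard count that satisfies all three per-shard limits."""
    return max(
        math.ceil(write_mbps / WRITE_MBPS_PER_SHARD),
        math.ceil(write_rps / WRITE_RECORDS_PER_SHARD),
        math.ceil(read_mbps / READ_MBPS_PER_SHARD),
    )

# Example workload: 10 MB/s written, 20,000 records/s, 15 MB/s read.
# The record rate is the binding constraint here.
print(shards_needed(10, 20_000, 15))
```

In on-demand mode this whole calculation (and re-sharding when the answer changes) is the service's problem rather than yours.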

Amazon MSK Serverless (preview) 

A public preview of Amazon MSK Serverless was announced today. This is a new type of Amazon MSK cluster that makes it easier for developers to run Apache Kafka without having to manage its capacity. MSK Serverless automatically provisions and scales compute and storage resources and supports throughput-based billing, so you can use Apache Kafka on demand. You pay an hourly rate per cluster and an hourly rate for each partition you create, plus per-gigabyte rates for throughput and storage.

Getting started with Apache Kafka is now even easier. In the AWS Management Console, you can set up secure, highly available clusters that automatically scale as your application's I/O changes. MSK Serverless is fully compatible with Apache Kafka, so you can run existing applications without any code changes or create new applications using familiar tools and APIs. MSK Serverless supports out-of-the-box integrations with other AWS services such as AWS PrivateLink, access control using AWS Identity and Access Management (IAM), and data schema storage using the AWS Glue Schema Registry. More details are in the documentation.

Amazon Redshift Serverless (preview)

Amazon Redshift now offers a serverless preview for running and scaling analytics without having to create and manage data warehouse clusters. With Amazon Redshift Serverless, analysts, data engineers, and developers can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse compute capacity to deliver best-in-class performance for all of your analytics. You pay per second for the compute resources used to process data and execute queries. And of course, you can use this new Redshift mode without making any changes to existing ETL jobs, analytics, or BI applications.

Right in the AWS Management Console, you can start querying and analyzing data with Amazon Redshift Serverless; there is no need to manually select node types, node counts, scaling, or other settings. You can use the ready-made examples and datasets, along with the sample queries, to try out Redshift's capabilities immediately. You can create databases, schemas, and tables and load your data from Amazon S3, access data from shared resources (via Amazon Redshift data sharing), or restore a previously created cluster snapshot. Amazon Redshift Serverless also lets you directly analyze data in Amazon S3 data lakes as well as in operational databases such as Amazon Aurora and Amazon RDS.

The Amazon Redshift Serverless preview is available in the following regions: US East (N. Virginia), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo). See the service page, the blog post, and the documentation to start working with Redshift Serverless.

Amazon EMR Serverless (preview)

A preview of Amazon EMR Serverless was introduced - a new serverless option in Amazon EMR that lets data engineers easily and efficiently perform petabyte-scale data analytics in the cloud. Amazon EMR is a cloud big data platform used to run large-scale distributed data processing jobs, interactive SQL queries, and machine learning workloads using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto.

With EMR Serverless, analysts can run applications built with these frameworks without having to tune, optimize, or manage clusters. EMR Serverless automatically provisions and scales the compute and memory resources required to run the analytics job, and customers pay only for the resources they use.

With EMR Serverless, you simply select the framework and version you want to use for your application and submit jobs via the API, the EMR Studio interactive environment, or JDBC/ODBC clients. EMR Serverless automatically calculates and allocates the compute and memory resources required to process requests and scales them up and down at different processing stages based on changing requirements. For example, a Spark job may require two worker nodes for the first 5 minutes, ten workers for the next 10 minutes, and five for the last 20 minutes, depending on the type of data processing. EMR Serverless automatically provisions and configures resources as needed, so you don't have to think ahead about how data volumes will change over time. And since you pay only for the resources you use, EMR Serverless turns out to be cost-effective for petabyte-scale data processing and analytics. You can check the status of running jobs, view job history, and debug jobs using EMR Studio.
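The pay-for-what-you-use math for the Spark example above is easy to make concrete. This sketch just tallies worker-minutes across the three stages described in the text and compares them with a fixed fleet sized for the peak; real EMR Serverless billing is per vCPU and GB of memory per second, which this simplification ignores.

```python
# (workers, duration_minutes) for the three stages of the example Spark job.
stages = [(2, 5), (10, 10), (5, 20)]

# Serverless: pay only for worker-time actually used.
serverless_worker_minutes = sum(workers * minutes for workers, minutes in stages)

# Fixed cluster: must be sized for the 10-worker peak for all 35 minutes.
peak_workers = max(workers for workers, _ in stages)
total_minutes = sum(minutes for _, minutes in stages)
fixed_worker_minutes = peak_workers * total_minutes

print(serverless_worker_minutes, "vs", fixed_worker_minutes)
```

Under these assumptions the job consumes 210 worker-minutes instead of the 350 a peak-sized fixed cluster would bill, which is the intuition behind the cost-effectiveness claim.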

A preview version of Amazon EMR Serverless is available in US East (N. Virginia). Go here to subscribe to the preview, read the blog, and consult the documentation for more information.

What's New in AWS Lake Formation

A data lake can help you piece together disparate data into a centralized repository. It can store structured and unstructured data. However, setting up and managing data lakes involves many manual, complex, and time-consuming tasks. AWS Lake Formation makes it easy to set up a secure data lake in just a few days.

Today we are pleased to announce the launch of some new AWS Lake Formation features that further simplify data loading, optimizing data storage, and managing data lake access.

Governed Tables are a new type of Amazon S3 table that makes it easier and more reliable to ingest and manage data at any scale. Governed tables support ACID transactions, which let multiple users insert and delete data across multiple governed tables at the same time. ACID transactions also let you run queries that return consistent and up-to-date data, and changes are not committed if your ETL jobs fail or errors occur during a data refresh.

Storage Optimization with Automatic Compaction - when enabled, AWS Lake Formation automatically compacts small S3 objects in governed tables into larger objects to optimize access through analytics tools such as Amazon Athena and Amazon Redshift Spectrum. With automatic compaction, you no longer have to implement dedicated ETL jobs that read, combine, and compact data into new files and then replace the originals.
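A simplified sketch of what such compaction does: group many small objects into fewer batches close to a target size, so query engines issue fewer S3 requests. The 128 MiB target and greedy grouping below are illustrative assumptions, not Lake Formation's actual algorithm.

```python
TARGET_MIB = 128  # illustrative target object size

def plan_compaction(object_sizes: list[int]) -> list[list[int]]:
    """Greedily group object sizes (MiB) into batches of roughly TARGET_MIB.

    Each inner list is a set of small objects that would be merged
    into one larger output object.
    """
    batches: list[list[int]] = []
    current: list[int] = []
    current_size = 0
    for size in sorted(object_sizes):
        if current and current_size + size > TARGET_MIB:
            batches.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        batches.append(current)
    return batches

# Six small objects collapse into two ~100 MiB outputs:
print(plan_compaction([10, 20, 30, 40, 50, 60]))
```

The benefit for readers is fewer, larger sequential reads; the benefit of the managed feature is not having to run and schedule jobs like this yourself.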

Granular Access Control with Row- and Cell-Level Security - you can restrict access to specific rows and columns of your data in query results and AWS Glue ETL jobs based on who is running the query.

There is no need to create and constantly update projections and subsets of your data for different roles and access levels. This feature works for both governed and traditional S3 tables. More information is in the blog and, as usual, in the documentation.

Amazon CloudWatch - Expanded Functionality

There are two significant updates to Amazon CloudWatch, both focused on application monitoring - but first things first. Let's start with Amazon CloudWatch Evidently: it is useful when you need to implement A/B testing or a feature-flag approach (feature toggles). In both situations the task is the same: roll out new functionality, and if users or the system behave unexpectedly (for example, the idea has not been validated with users), quickly return to the previous state. With Amazon CloudWatch Evidently, you can manage and track new feature launches:

A detailed tutorial with an example of A/B testing and creating a feature flag can be found in the launch article.
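The core mechanic behind both A/B tests and gradual feature rollouts can be sketched in a few lines: deterministically assign each user to a variation by hashing. Evidently manages this for you (plus metrics collection and launch tracking); the hashing scheme below is illustrative only.

```python
import hashlib

def variation(user_id: str, feature: str, rollout_percent: int) -> str:
    """Deterministically bucket a user into 'new' or 'current'.

    Hashing feature and user together keeps assignments stable per user
    but independent across features.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in 0..99
    return "new" if bucket < rollout_percent else "current"

# The same user always lands in the same bucket for a given feature:
print(variation("user-42", "checkout-redesign", 10))
```

Determinism is what makes rollback safe: dialing `rollout_percent` back to 0 instantly returns every user to the previous behavior with no state to clean up.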

The second Amazon CloudWatch update focuses on end-user monitoring. Real-User Monitoring (RUM) helps you collect metrics to analyze the user experience of your application. By adding the JavaScript snippet generated during Real-User Monitoring setup to your application pages, you can collect performance telemetry, JavaScript errors, and HTTP errors, track user journeys, and monitor other client-side data.


And these are not all of the announcements from these two days - but we are not saying goodbye: there are still several days of the conference ahead! We will continue to share the new products with you in text format and more. As a reminder, in the coming days there will be three live broadcasts discussing the most interesting announcements.

