AWS re:Invent 2021 Keynotes - AI/ML

03 December 2021, Friday, by Priyanka Boruah

We continue to share the latest announcements from AWS re:Invent 2021. The third day of re:Invent 2021 can safely be called a day of machine learning and databases, and in this article we will go through an impressive list of updates and new releases. If you missed the first part of the announcements, AWS solutions architects discussed all the events of the first days on yesterday's stream - follow the link to see the recording.

Amazon SageMaker Studio Lab (Preview)

One of AWS's missions is to make machine learning more accessible. Existing machine learning environments are often too complex for beginners or too limited to support modern machine learning experiments. Newbies want to get started quickly and not worry about deploying infrastructure, setting up services, or running out of budget. There is another barrier for many people: the need to provide billing and credit card information when registering.

What if you had a predictable, controlled Jupyter notebook environment in which you could not accidentally run up a big bill? One that does not require credit card information at all during registration?

Today AWS announced the public preview of Amazon SageMaker Studio Lab - a free service where developers, scientists, and data scientists can experiment with and study machine learning, and which requires neither knowledge of cloud setup nor a credit card.

With Amazon SageMaker Studio Lab, users can experiment with data and machine learning without having to set up or run any infrastructure. It is based on the open-source JupyterLab web application, giving users a completely open environment in which they can use any framework, such as PyTorch, TensorFlow, MXNet, or Hugging Face, and libraries such as scikit-learn, NumPy, and Pandas. Studio Lab automatically saves user sessions, so the next time users log in they can resume where they left off.
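
For example, a first notebook cell might install a framework and run a quick sanity check. Here is a minimal sketch, assuming you chose PyTorch (any of the frameworks above works the same way):

```python
# Studio Lab notebook cell: install PyTorch into the environment,
# then run a quick sanity check.
%pip install torch --quiet

import torch

# Confirm whether the session has a GPU and do a small computation.
print("GPU available:", torch.cuda.is_available())
x = torch.rand(3, 3)
print(x @ x.T)
```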

To start working with Studio Lab, follow this link and request an account. You will only need your email address to register. When your request is approved, you will receive an email with a link to the Studio Lab account registration page. There you can create your account with a verified email address and set a username and password. This account is independent of any AWS account and does not require payment information. Soon after, you will be able to start learning machine learning and experimenting with Jupyter notebooks. Examples such as AWS Machine Learning University, Dive into Deep Learning, and Hugging Face notebooks are also available there.

Studio Lab is easy to set up. In fact, the only thing you need to do is choose whether you need a CPU or GPU instance for your project.

You can choose between 12 CPU hours or 4 GPU hours per session, with an unlimited number of sessions available to you. In addition, you get at least 15 GB of persistent storage per project. At the end of each session, Studio Lab saves your environment, so you can pick up where you left off.

Studio Lab comes with a basic Python image to get you started. Only a few libraries are preinstalled in the image, to save the available space for the frameworks and libraries you actually need.
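
In practice, that means the first step of most projects is installing your own stack. A hedged sketch of what that might look like from a notebook cell (the package choices are just examples):

```python
# The base image is deliberately slim, so install what the project needs.
# The %conda magic installs into the environment backing the running kernel.
%conda install -y -c conda-forge numpy pandas scikit-learn

import numpy as np
import pandas as pd

# Verify the installs are visible to the kernel.
print(np.__version__, pd.__version__)
```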

Studio Lab is tightly integrated with GitHub and offers full Git command-line support. This makes it easy to clone, copy, and save projects. You can also add the Open in Studio Lab badge to a README.md file, or save your notebooks to a public GitHub repository to share your work with others.

You can request a free Amazon SageMaker Studio Lab account today. The number of new account registrations will be limited to ensure a high quality of service for all users. Sample notebooks can be found in the Studio Lab GitHub repository.

Amazon SageMaker Canvas

Amazon SageMaker Canvas is a new feature in Amazon SageMaker that enables business analysts to create accurate machine learning models and generate predictions using a graphical interface without having to write code.

Amazon SageMaker Canvas provides a user interface for quickly connecting to and accessing data from a variety of sources, and for preparing that data for building machine learning models. SageMaker Canvas uses AutoML technology to automatically train and build models from your data and determine the best one, so that you can generate single or batch predictions. SageMaker Canvas is integrated with SageMaker Studio, making it easy for business analysts to share models with data scientists.

Amazon SageMaker Canvas is already available in the US AWS Regions (Ohio, N. Virginia, and Oregon) and Europe (Frankfurt and Ireland). You can read more in the official blog, and to get started, you can go to the product page.

Amazon SageMaker Serverless Inference

Amazon SageMaker Serverless Inference (Preview) is a new serverless way to deploy ML models. With Serverless Inference, you no longer have to think about virtual machines: Amazon SageMaker deploys the model and scales it automatically. You pay only for the actual running time of the model and the amount of data processed, not for idle time. This is ideal for models that are used intermittently or have unpredictable usage patterns.

For example, suppose you are building a chatbot that helps company employees get services from the accounting department. Such a chatbot will be used most actively a few days a month, on payroll days. Amazon SageMaker Serverless Inference would be the perfect way to deploy an ML model for such a chatbot without paying for the time when the model is not in use.

With the introduction of SageMaker Serverless Inference, SageMaker now provides four ways to deploy ML models. In addition to Serverless Inference, these are SageMaker Real-Time Inference for models requiring low (millisecond) latency; SageMaker Batch Transform for batch processing of large amounts of data; and SageMaker Asynchronous Inference for models requiring asynchronous processing due to long processing times or large input sizes.
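
To give a feel for the new option, here is a minimal sketch of a serverless deployment with the SageMaker Python SDK. The container image, S3 path, and role are placeholders, and the exact parameters may evolve while the feature is in preview:

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()
model = Model(
    image_uri="<inference-container-image>",  # placeholder
    model_data="s3://<bucket>/model.tar.gz",  # placeholder
    role="<execution-role-arn>",              # placeholder
    sagemaker_session=session,
)

# No instance type or count: SageMaker provisions and scales capacity itself.
predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,  # memory allocated per invocation
        max_concurrency=5,       # cap on concurrent invocations
    )
)
print(predictor.endpoint_name)
```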

A Jupyter notebook is available in the SageMaker examples repository on GitHub that shows how to work with SageMaker Serverless Inference from start to finish.

Serverless Inference is available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney).

Amazon SageMaker Studio integration with Amazon EMR - Hadoop and Spark 

Amazon SageMaker Studio can now work directly with Amazon EMR clusters. You can create, delete, start, and stop Amazon EMR clusters right from SageMaker Studio, and use the Spark UI to monitor jobs running on a cluster. EMR cluster templates optimized for different tasks are also available.

You no longer have to leave SageMaker Studio to configure an EMR cluster and figure out how to connect to it, and monitoring Spark jobs no longer requires configuring proxies and tunnels. Now you can do it all in one click right from your notebook.
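
Connecting from a notebook comes down to a couple of magics. A hedged sketch, assuming a Studio image that ships the analytics extension (the cluster ID is a placeholder):

```python
# Load the Studio analytics magics and attach the notebook to an EMR cluster.
%load_ext sagemaker_studio_analytics_extension.magics

# Connect to an existing cluster; subsequent cells can submit Spark jobs,
# and a link to the Spark UI appears in the notebook.
%sm_analytics emr connect --cluster-id j-XXXXXXXXXXXX --auth-type None
```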

You can read more about working with Amazon EMR and Spark in SageMaker Studio in the AWS documentation and blog.

Amazon SageMaker Training Compiler

Amazon SageMaker Training Compiler is a new SageMaker feature that can help accelerate the training of deep learning models (neural networks) by up to 50%.

Training neural networks can be very time-consuming. For example, training the popular NLP model RoBERTa on a single GPU takes 25,000 hours. Today, only practitioners with extensive experience can optimize training time, which slows the adoption of machine learning. Data scientists usually write Python code using TensorFlow, PyTorch, or other frameworks, and these frameworks convert the Python code into mathematical functions that run on GPUs. Usually, generic code is used that is not tuned to the specific ML model being trained. SageMaker Training Compiler generates GPU code tailored to training your specific model, which makes training faster and requires less memory.

Training the Hugging Face GPT-2 model with SageMaker Training Compiler takes only 90 minutes instead of 3 hours, and enabling the compiler requires adding only two lines of Python code.
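
Those two lines are the import and the compiler_config argument. A sketch with the SageMaker Python SDK, where the script name, role, and data path are placeholders and the framework versions are those supported at launch:

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

estimator = HuggingFace(
    entry_point="train.py",          # your existing training script (placeholder)
    instance_type="ml.p3.2xlarge",   # single-GPU training instance
    instance_count=1,
    role="<execution-role-arn>",     # placeholder
    transformers_version="4.11",
    pytorch_version="1.9",
    compiler_config=TrainingCompilerConfig(),  # line two: turn the compiler on
)
estimator.fit("s3://<bucket>/training-data")   # placeholder S3 path
```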

Not every model can be optimized. SageMaker Training Compiler now supports Hugging Face Transformers in TensorFlow and PyTorch.

Jupyter notebooks showing how to work with the SageMaker Training Compiler are available, along with documentation. The Training Compiler is already available in EU (Ireland), US East (N. Virginia), US East (Ohio), and US West (Oregon).

Amazon SageMaker Inference Recommender

Amazon SageMaker Inference Recommender is a new feature in Amazon SageMaker Studio that allows you to load test ML models and optimize their resource allocation. 

Prior to the advent of SageMaker Inference Recommender, MLOps engineers found it difficult to select EC2 instances with the optimal price/performance ratio while accounting for the peculiarities of each particular ML model. This was done by trial and error.

Now you can quickly load test, evaluate performance, throughput, and latency, and deploy the ML model to the optimal instance type. This allows MLOps engineers to be confident that their ML models behave predictably under workload. 
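
Here is a minimal sketch of kicking off a recommendation job with boto3, assuming a model already registered in the SageMaker Model Registry (the names and ARNs are placeholders):

```python
import boto3

sm = boto3.client("sagemaker")

# A "Default" job produces quick instance recommendations;
# "Advanced" runs a custom load test against your own traffic pattern.
sm.create_inference_recommendations_job(
    JobName="my-recommender-job",                         # placeholder
    JobType="Default",
    RoleArn="<execution-role-arn>",                       # placeholder
    InputConfig={"ModelPackageVersionArn": "<model-package-arn>"},
)

# Once the job finishes, the response lists candidate instance types
# with measured latency, throughput, and cost.
print(sm.describe_inference_recommendations_job(JobName="my-recommender-job"))
```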

More information is available in the documentation. An example of working with Inference Recommender from code is available in a Jupyter notebook.

Amazon RDS Custom for SQL Server and Oracle

On October 26, 2021, AWS launched Amazon RDS Custom for Oracle, a managed database service for applications requiring configuration of the operating system and the database management environment itself. RDS Custom allows you to access and configure the database server host and operating system, for example, by applying special patches and changing the settings of the database software itself to support third-party applications that require privileged access.

Today AWS announced the general availability of Amazon RDS Custom for SQL Server, which supports applications that need a specific configuration as well as third-party applications that require special settings, in areas such as enterprise systems, e-commerce, and content management systems like Microsoft SharePoint.

With RDS Custom for SQL Server, you can enable features that require elevated privileges such as the SQL Common Language Runtime (CLR), install specific drivers to enable heterogeneous linked servers, or have more than 100 databases per instance.
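
Provisioning looks much like regular RDS. A hedged boto3 sketch, where the identifiers, credentials, key, and instance profile are placeholders and RDS Custom has extra prerequisites (a customer managed KMS key and an IAM instance profile) that must exist first:

```python
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="my-custom-sqlserver",    # placeholder
    Engine="custom-sqlserver-ee",                  # RDS Custom SQL Server, Enterprise Edition
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=100,
    MasterUsername="admin",                        # placeholder
    MasterUserPassword="<password>",               # placeholder
    KmsKeyId="<customer-managed-key-id>",          # RDS Custom requires a customer managed key
    CustomIamInstanceProfile="<instance-profile>", # grants the DB host its required permissions
)
```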

Because it is a managed service, RDS Custom for SQL Server lets you focus on the work that matters to your business. Automated backups and other operational tasks give you peace of mind, knowing that your data is safe and ready for recovery if needed.

Amazon RDS Custom for SQL Server is already available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and EU (Stockholm).

For more information, see the product page and Amazon RDS Custom documentation. Leave feedback on the AWS Forum for Amazon RDS, or through your regular AWS support contacts.

Amazon DynamoDB Standard-Infrequent Access table class

One of the interesting releases at re:Invent 2021 was Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA), a new table class for DynamoDB that reduces storage costs by 60% compared with the existing DynamoDB Standard table class while providing the same performance, reliability, and scalability.

Currently, many customers move their rarely used data between DynamoDB and Amazon Simple Storage Service (Amazon S3). This means they have to build a data migration process and rather complex applications that support two completely different APIs - one for DynamoDB and the other for Amazon S3. The new DynamoDB Standard-IA table class solves this problem. It is designed for customers who need a cost-effective way to store rarely accessed data in DynamoDB without changing their application code. With the new table class, you get the same high read and write performance from DynamoDB and use all the same APIs.

With the DynamoDB Standard-IA table class, you can save up to 60% on storage costs compared with the standard DynamoDB table class. However, reads and writes for the new class cost more than for standard tables, so it is important to consider your use case before switching tables to the new class.

DynamoDB Standard-IA is a great fit if you need to store terabytes of data for several years and the data must be highly available but is rarely accessed. An example would be a social networking application where end users rarely access their old posts. Those posts are still kept, because if someone scrolls through a profile to see a 10-year-old photo, they want it to load as quickly as a newer post.

E-commerce websites are another good use case. These sites may have many products that are not frequently accessed, but site admins still want them available in the store for purchase. It is also a good storage solution for past customer orders: the DynamoDB Standard-IA table class lets you keep order history at a lower cost.

You can change the class of an existing table between Standard-IA and Standard twice every 30 days without any loss of performance or availability, and all DynamoDB functionality remains available with the new table class. Alternatively, you can create a new table with the DynamoDB Standard-IA class from the start.
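
Both paths come down to a single parameter in code. A sketch with boto3 (the table and key names are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Switch an existing table to Standard-IA (allowed twice per 30 days).
dynamodb.update_table(
    TableName="posts",
    TableClass="STANDARD_INFREQUENT_ACCESS",
)

# Or create a new table directly in the Standard-IA class.
dynamodb.create_table(
    TableName="order-history",
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
    TableClass="STANDARD_INFREQUENT_ACCESS",
)
```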

DynamoDB Standard-IA is available in all AWS Regions except China and AWS GovCloud. In US East (N. Virginia), for example, DynamoDB Standard-IA storage costs $0.10 per GB (60% less than DynamoDB Standard), while reads and writes cost 25% more.
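
A back-of-the-envelope comparison shows where the trade-off lands. The Standard-IA storage price and the 25% request surcharge come from the figures above; the on-demand request prices are assumptions based on the public us-east-1 price list, so check the pricing page for current numbers:

```python
# Monthly cost sketch for a storage-heavy, access-light table.
storage_gb = 2000        # 2 TB of old posts
read_millions = 5        # read request units per month, in millions
write_millions = 1       # write request units per month, in millions

# Assumed us-east-1 on-demand prices: $0.25/GB-month storage, $0.25 per
# million reads, $1.25 per million writes; Standard-IA storage is $0.10/GB
# and its requests cost 25% more.
standard = storage_gb * 0.25 + read_millions * 0.25 + write_millions * 1.25
standard_ia = storage_gb * 0.10 + (read_millions * 0.25 + write_millions * 1.25) * 1.25

print(f"Standard:    ${standard:,.2f}/month")     # $502.50
print(f"Standard-IA: ${standard_ia:,.2f}/month")  # $203.12
```

Storage-heavy, rarely accessed tables come out well ahead on Standard-IA; request-heavy tables may not.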

For more information on this feature and its pricing, see the DynamoDB Standard-IA page and the DynamoDB Pricing page.

Amazon DevOps Guru for RDS to Detect, Diagnose, and Resolve Amazon Aurora-Related Issues using ML

AWS yesterday announced Amazon DevOps Guru for RDS, a new feature of Amazon DevOps Guru that allows developers to easily detect, diagnose, and resolve performance and operational issues in Amazon Aurora.

Hundreds of thousands of customers currently use Amazon Aurora because of its high availability, scalability, and reliability. But as applications grow in size and complexity, it becomes more difficult for these customers to quickly identify and fix operational and performance issues. Developers will now have enough information to determine the exact cause of a database performance problem.

DevOps Guru for RDS uses machine learning to automatically identify and analyze a wide range of database performance issues, such as overuse of host resources, database bottlenecks, or misbehaving SQL queries, and it recommends solutions to fix the problems it finds. You don't need to be a database or machine learning expert to use this feature.

When an issue is detected, DevOps Guru for RDS displays its findings in the DevOps Guru console and sends notifications via Amazon EventBridge or Amazon Simple Notification Service (SNS), so developers can handle issues automatically and take action in real time.
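
For example, you might forward new insights to an SNS topic with an EventBridge rule. A sketch with boto3, where the rule name and topic ARN are placeholders and the event pattern follows the documented DevOps Guru event source:

```python
import json

import boto3

events = boto3.client("events")

# Match DevOps Guru events announcing newly opened insights.
events.put_rule(
    Name="devops-guru-new-insight",
    EventPattern=json.dumps({
        "source": ["aws.devops-guru"],
        "detail-type": ["DevOps Guru New Insight Open"],
    }),
)

# Route matching events to an SNS topic that notifies the on-call developer.
events.put_targets(
    Rule="devops-guru-new-insight",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:<account-id>:<topic>"}],
)
```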

DevOps Guru for RDS is available at no additional charge beyond the standard DevOps Guru pricing for the RDS resources it analyzes.

DevOps Guru for RDS is available in all regions where DevOps Guru is available: US East (Ohio and Northern Virginia), US West (Oregon), Asia Pacific (Singapore, Sydney, and Tokyo), Europe (Frankfurt, Ireland, and Stockholm).

Do you have any questions about the new products and updates? The best way to sort them out is to discuss them with colleagues over a cup of coffee, or to join the second stream of AWS re:Invent 2021 results, where you can ask your questions to AWS architects who will be happy to answer them.
