Hello world. This is the monthly Natural Language Processing (NLP) newsletter covering everything related to NLP at AWS. This is our fourth newsletter on Dev.to. If you missed our earlier episodes, here are Ep01, Ep02, and Ep03. Feel free to leave comments and share it on your social networks to celebrate this new episode with us!
Service updates about NLP on AWS
Announcing conversational AI partner solutions
AWS conversational AI partner solutions are now available, enabling enterprises to implement high-quality, highly effective chatbot, virtual assistant, and Interactive Voice Response (IVR) solutions through the domain expertise of AWS Partners and AWS AI and machine learning (ML) services.
Introducing spelling support in Amazon Lex
You can configure your Amazon Lex bots to capture the spelling (e.g., “Z A C”) or the phonetic description (e.g., “Z as in Zebra, A as in Apple, C as in Cat”) for the first name, last name, email address, alphanumeric, and UK postal code built-in slot types. Callers can use the spelling support to provide names with difficult or alternative spellings (e.g., “Chris” vs. “Kris”), and they can disambiguate easily confused letters such as “N” vs. “M” by using phonetic descriptions (e.g., to spell the name Min: “M as in Mary, I as in Idea, N as in Nancy”). The spelling capability expands on the built-in slot types so you can simplify dialog management and improve the end-user experience.
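For illustration, here is a minimal sketch of a Lex V2 dialog code hook (an AWS Lambda function) that re-elicits a slot in a spelling style; the FirstName slot name is a hypothetical example.

```python
def lambda_handler(event, context):
    """Ask the caller to spell out their first name, word by word."""
    return {
        "sessionState": {
            "dialogAction": {
                "type": "ElicitSlot",
                "slotToElicit": "FirstName",  # hypothetical slot name
                # Spelling styles: Default | SpellByLetter | SpellByWord
                "slotElicitationStyle": "SpellByWord",
            },
            # Pass the current intent back unchanged
            "intent": event["sessionState"]["intent"],
        }
    }
```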
Amazon Kendra releases AWS Single Sign-On integration for secure search
Organizations can now use the AWS Single Sign-On (AWS SSO) identity store with Amazon Kendra for user context filtering. User context filtering allows organizations to only show content that a user has access to. Amazon Kendra can fetch the access levels of groups and users from an AWS SSO identity store and use this information to return only the documents a given user has access to. Amazon Kendra indexes the document access control information, and at search time this is compared with the user and group information retrieved from AWS SSO to return filtered search results. AWS SSO supports identity providers such as Azure AD, CyberArk, and Okta.
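As a rough sketch, a query with user context might look like the following; the index ID and user ID are placeholders.

```python
import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="<your-index-id>",
    QueryText="What is our travel reimbursement policy?",
    # Kendra returns only documents this AWS SSO user is allowed to see
    UserContext={"UserId": "jane.doe@example.com"},
)

for item in response["ResultItems"]:
    print(item["DocumentTitle"]["Text"])
```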
Amazon Translate now extends support for Active Custom Translation to all language pair combinations
Active Custom Translation (ACT) is now generally available for customizing your translations between any currently supported languages. For example, you can now use ACT between German and French.
ACT produces custom-translated output without the need to build and maintain a custom translation model. With other custom translation products, customers can spend a lot of time and money on overhead to manage and maintain separate instances of both the customer data and the custom translation model for each language pair. With ACT, Amazon Translate uses your preferred translation examples as parallel data (PD) to customize the translation output. You can update your PD as often as needed to improve translation quality without having to worry about retraining or managing custom translation models.
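As a minimal sketch, the flow with boto3 could look like this; bucket names, the role ARN, and the parallel data name are placeholders.

```python
import boto3

translate = boto3.client("translate")

# Register your preferred translation examples as parallel data once...
translate.create_parallel_data(
    Name="my-parallel-data",
    ParallelDataConfig={"S3Uri": "s3://my-bucket/parallel-data.csv", "Format": "CSV"},
)

# ...then reference it in an asynchronous batch translation job (German to French).
translate.start_text_translation_job(
    JobName="act-de-to-fr",
    InputDataConfig={"S3Uri": "s3://my-bucket/input/", "ContentType": "text/plain"},
    OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateJobRole",
    SourceLanguageCode="de",
    TargetLanguageCodes=["fr"],
    ParallelDataNames=["my-parallel-data"],
)
```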
Amazon Translate now supports AWS KMS encryption
You can now use your own encryption keys from the AWS Key Management Service (AWS KMS) to encrypt data placed in your S3 bucket. Until now, Amazon Translate used Amazon S3 server-side encryption (SSE-S3) to encrypt your data. AWS KMS makes it easy for you to create and manage keys while controlling the use of encryption across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect your keys, and it is integrated with AWS CloudTrail to provide logs of all key usage to help meet your regulatory and compliance needs. The feature can be configured via the AWS Management Console or the SDK, and it supports Amazon Translate’s asynchronous batch translation jobs.
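For illustration, a batch job that encrypts its output with your own key could be configured like this sketch; the key ARN, bucket, and role are placeholders.

```python
import boto3

translate = boto3.client("translate")

translate.start_text_translation_job(
    JobName="batch-job-with-cmk",
    InputDataConfig={"S3Uri": "s3://my-bucket/input/", "ContentType": "text/plain"},
    OutputDataConfig={
        "S3Uri": "s3://my-bucket/output/",
        # Encrypt the translated output with your AWS KMS key
        "EncryptionKey": {
            "Type": "KMS",
            "Id": "arn:aws:kms:us-east-1:123456789012:key/<key-id>",
        },
    },
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateJobRole",
    SourceLanguageCode="en",
    TargetLanguageCodes=["es"],
)
```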
Inferentia got updates that are important for NLP performance
Neuron 1.16.0 is a release that requires your attention: you must update to the latest Neuron driver (aws-neuron-dkms version 2.1 or newer) for a successful installation or upgrade. This release introduces Neuron Runtime 2.x, improves performance (up to 20% additional throughput and up to 25% lower latency), upgrades PyTorch Neuron to PyTorch 1.9.1, adds support for new APIs (torch.neuron.DataParallel() and torch_neuron.is_available()), and adds new features and capabilities (a compiler --fast-math option for finer tuning of the accuracy/performance trade-off, and the MXNet FlexEG feature). It also improves tools, adds support for additional operators, reduces model loading times, simplifies the Neuron installation steps, and improves the user experience of container creation and deployment. In addition, it includes bug fixes, new application notes, updated tutorials, and announcements of software deprecation and maintenance.
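As a small sketch of the new APIs, assuming a Hugging Face model as the workload (the model name is illustrative):

```python
import torch
import torch_neuron  # extends torch with the Neuron tracing backend
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True)

example = tokenizer("Neuron 1.16.0 cuts latency.", return_tensors="pt")
inputs = (example["input_ids"], example["attention_mask"])

print(torch_neuron.is_available())                  # new availability check
model_neuron = torch.neuron.trace(model, inputs)    # compile for Inferentia
parallel = torch.neuron.DataParallel(model_neuron)  # new: shard across NeuronCores
```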
NLP on SageMaker
Bring your own data to classify news with Amazon SageMaker and Hugging Face
Hugging Face is a popular open-source library for NLP, with over 7,000 pretrained models in more than 164 languages and support for different frameworks. AWS and Hugging Face have a partnership that allows seamless integration through Amazon SageMaker, with a set of AWS Deep Learning Containers (DLCs) for training and inference in PyTorch or TensorFlow, and Hugging Face estimators and predictors for the SageMaker Python SDK. These capabilities in SageMaker help developers and data scientists get started with NLP on AWS more easily. Processing text with transformers in deep learning frameworks such as PyTorch is typically a complex and time-consuming task for data scientists, often leading to frustration and inefficiency in NLP projects. The rise of AI communities like Hugging Face, combined with the power of ML services in the cloud like SageMaker, therefore accelerates and simplifies the development of these text processing tasks.
In this post, we show you how to bring your own data for a text classification task by fine-tuning and deploying state-of-the-art models with SageMaker, the Hugging Face containers, and the SageMaker Python SDK.
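As a minimal sketch, fine-tuning with the Hugging Face estimator looks roughly like this; the entry-point script, role ARN, S3 paths, and hyperparameters are placeholders.

```python
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="train.py",  # your fine-tuning script
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
    hyperparameters={"epochs": 3, "model_name": "distilbert-base-uncased"},
)

# Channels map to S3 prefixes holding your own labeled news data
huggingface_estimator.fit({
    "train": "s3://my-bucket/news/train/",
    "test": "s3://my-bucket/news/test/",
})
```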
Amazon SageMaker Asynchronous Inference with Hugging Face models
Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. Until now, SageMaker offered two inference options for deploying machine learning models: (1) real-time inference for low-latency workloads, and (2) batch transform, an offline option for processing inference requests on batches of data available upfront. Real-time inference suits workloads with payload sizes under 6 MB that need to be processed within 60 seconds; batch transform suits offline inference on batches of data.
This notebook introduces the SageMaker Asynchronous Inference capability with Hugging Face models, covering the steps required to create an asynchronous inference endpoint and test it with sample requests.
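In outline, deploying a Hugging Face model to an asynchronous endpoint with the SageMaker Python SDK looks like the following sketch; the model artifact, role, and bucket paths are placeholders.

```python
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.async_inference import AsyncInferenceConfig

model = HuggingFaceModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    # Requests are queued; results land in this S3 prefix
    async_inference_config=AsyncInferenceConfig(output_path="s3://my-bucket/async-results/"),
)

# Point the endpoint at a payload already uploaded to S3
response = predictor.predict_async(input_path="s3://my-bucket/async-requests/payload.json")
```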
AWS Blog posts, papers, and more
Monitor operational metrics for your Amazon Lex chatbot
In this post, we look at deploying an analytics dashboard solution for your Amazon Lex bot. The solution uses your Amazon Lex bot conversation logs to automatically generate metrics and visualizations, and it creates an Amazon CloudWatch dashboard where you can track your chatbot performance, trends, and engagement insights.
Generate high-quality meeting notes using Amazon Transcribe and Amazon Comprehend
In this post, we demonstrate a solution that uses the Amazon Chime SDK, Amazon Transcribe, Amazon Comprehend, and AWS Step Functions to record, process, and generate meeting artifacts. Our proposed solution is based on a Step Functions workflow that starts when the meeting bot stores the recorded file in an Amazon Simple Storage Service (Amazon S3) bucket. The workflow contains steps that transcribe and derive insights from the meeting recording; lastly, it compiles the data into an email template and sends it to meeting attendees. You can easily adapt this workflow for different use cases, such as web conferencing solutions.
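The two core service calls in such a workflow could look like this rough sketch (the real solution orchestrates them with Step Functions; job names and URIs are placeholders):

```python
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

# Step 1: transcribe the meeting recording stored in S3
transcribe.start_transcription_job(
    TranscriptionJobName="meeting-2021-11-29",
    Media={"MediaFileUri": "s3://my-bucket/recordings/meeting.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
    OutputBucketName="my-bucket",
)

# Step 2: once the transcript JSON is ready in S3, derive insights from it
transcript_text = "..."  # load from the transcript file produced above
key_phrases = comprehend.detect_key_phrases(Text=transcript_text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=transcript_text, LanguageCode="en")
```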
Natural Language Processing course on Machine Learning University
This course is designed to help you get started with Natural Language Processing (NLP) and learn how to use NLP in various use cases. It covers topics such as text processing, regression and tree-based models, hyperparameter tuning, recurrent neural networks, attention mechanisms, and transformers.
Hugging Face Course Part 2 launched
Part 2 of the Hugging Face Course was released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task, and then upload the result to the Model Hub. Part 2 focuses on all the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization, and question answering. It also takes a deeper dive into the whole Hugging Face ecosystem, in particular 🤗 Datasets and 🤗 Tokenizers.
Upcoming events
AWS re:Invent
Mon, Nov 29, 2021 – Fri, Dec 3, 2021
AWS re:Invent is a learning conference hosted by Amazon Web Services (AWS) for the global cloud computing community. The in-person event features keynote announcements, training and certification opportunities, access to 1,500+ technical sessions, the Expo, after-hours events, and much more.
The following blog posts can help you get the most out of re:Invent this year:
- Your guide to AI and ML at AWS re:Invent 2021
- AWS AI/ML Community attendee guides to AWS re:Invent 2021
Miscellaneous
She Talks Tech: from WeAreTechWomen podcast - What does it take for computers to understand human language?
This episode is the fourth in an AWS special series of the “She Talks Tech” podcast. The objective of these podcasts is to demonstrate how cloud technology is helping transform many industries, such as retail, financial services, and even sports. We also want to hear from the women behind these stories who are enabling these transformations, to understand what they do day to day and how they got into working in technology.
In this episode, Anna, an Artificial Intelligence Specialist Solutions Architect at AWS, and Mia, a Machine Learning Specialist Solutions Architect at AWS, share their story about Natural Language Processing using artificial intelligence.
Data Science on AWS Meetup
Mia Chang talked about NLP inference optimizations on Amazon SageMaker. Watch the recording.
Visualize and understand NLP models with the Language Interpretability Tool
The Language Interpretability Tool (LIT) is for researchers and practitioners looking to understand NLP model behavior through a visual, interactive, and extensible tool.
Use LIT to ask and answer questions like:
- What kind of examples does my model perform poorly on?
- Why did my model make this prediction? Can the prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
- Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?
LIT contains many built-in capabilities but is also customizable, with the ability to add custom interpretability techniques, metric calculations, counterfactual generators, visualizations, and more.
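Getting a local LIT instance running follows this rough pattern, based on the project's public dev_server API; MyModel and MyDataset stand in for your own wrappers of LIT's model and dataset interfaces.

```python
from lit_nlp import dev_server
from lit_nlp import server_flags

# MyModel / MyDataset are hypothetical subclasses of
# lit_nlp.api.model.Model and lit_nlp.api.dataset.Dataset
models = {"sst-tiny": MyModel()}
datasets = {"sst-dev": MyDataset()}

lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
lit_demo.serve()  # then open the printed local URL in a browser
```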
State of the Art Spark NLP - a production-grade, fast & trainable implementation of the latest NLP research
John Snow Labs’ Spark NLP is an open source text processing library for Python, Java, and Scala. It provides production-grade, scalable, and trainable versions of the latest research in natural language processing.
Spark NLP is by far the most widely used NLP library in the enterprise (source: gradientflow.com) and is heavily adopted in healthcare and pharmaceuticals (59%).
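For a taste of the API, a pretrained pipeline runs in a few lines; the explain_document_dl pipeline name comes from the library's public model hub.

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()  # starts a Spark session configured for Spark NLP

pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("John Snow Labs builds Spark NLP for the enterprise.")

print(result["entities"])  # entities found by the pretrained NER stage
```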
Stay in touch with NLP on AWS
Our contact: aws-nlp@amazon.com
Email us to (1) tell us about your awesome NLP-on-AWS project, (2) let us know which post in the newsletter helped your NLP journey, or (3) suggest other topics you want us to cover in the newsletter. Talk to you soon.