NLP is one of the most talked-about technologies today. All the software giants have staked out their own expertise: Google has BERT and Transformer-XL, Facebook has RoBERTa and XLM/mBERT, and Microsoft, through its partnership with and major investment in OpenAI, is closely tied to GPT-2 and GPT-3. LinkedIn has now launched its own NLP framework, DeText.
NLP covers a wide range of tasks. Some of the most common ones are:
- Classification
- Document Ranking
- Auto Completion
- Named Entity Recognition
- Machine Translation
Of these tasks, DeText currently supports ranking, classification, and sequence completion.
The DeText logo is inspired by the sloth, and so is the motto: "Relax like a sloth, and let DeText do the understanding for you."
What is it?
DeText is a Deep Text understanding framework for NLP-related ranking, classification, and language generation tasks. It leverages semantic matching, using deep neural networks, to understand member intent in search and recommender systems. As a general NLP framework, DeText can currently be applied to many tasks, including search and recommendation ranking, multi-class classification, and query understanding.
Features
- Natural language understanding powered by state-of-the-art deep neural networks
  - Automatic feature extraction with deep models
  - End-to-end training
  - Interaction modeling between ranking sources and targets
- A general framework with great flexibility to meet the requirements of different production applications (a rough illustration follows this list)
  - Flexible deep model types
  - Multiple loss function choices
  - User-defined source/target fields
  - Configurable network structure (layer sizes and number of layers)
  - Tunable hyperparameters ...
- Reaching a good balance between effectiveness and efficiency to meet industry requirements
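To give a feel for what that flexibility means in practice, here is a purely illustrative sketch. The field names below are hypothetical and are not DeText's actual configuration parameters; they only show the kind of knobs such a framework exposes.

```python
# Hypothetical illustration only -- these names are NOT DeText's real
# parameters; they sketch the kind of options a text-ranking framework exposes.
example_config = {
    "encoder_type": "cnn",                     # flexible deep model type: cnn / bert / lstm
    "loss": "softmax_ranking",                 # one of several possible loss functions
    "source_fields": ["query"],                # user-defined source text fields
    "target_fields": ["title", "headline"],    # user-defined target text fields
    "num_hidden_layers": 2,                    # configurable network depth
    "hidden_layer_sizes": [256, 64],           # configurable layer sizes
    "learning_rate": 1e-3,                     # tunable hyperparameter
}
```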
A Brief Overview of the Framework (Architecture)
The DeText framework contains multiple components:
Word embedding layer. It converts the sequence of words into a d-by-n matrix, where d is the embedding dimension and n is the number of words.
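As a minimal sketch of that step (plain NumPy with a toy random vocabulary, not DeText's actual embedding code), the lookup turns n token ids into a d-by-n matrix:

```python
import numpy as np

# Toy embedding lookup: d = embedding dimension, n = sequence length.
d, vocab_size = 8, 100
embedding_table = np.random.randn(vocab_size, d)   # one d-dim vector per word id

token_ids = np.array([12, 7, 45, 3])               # a sequence of n = 4 word ids
word_matrix = embedding_table[token_ids].T         # shape (d, n): one column per word
print(word_matrix.shape)                           # (8, 4)
```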
CNN/BERT/LSTM text encoding layer. It takes the word embedding matrix as input and maps the text into a fixed-length embedding. It is worth noting that DeText adopts representation-based methods over interaction-based methods. The main reason is computational complexity: the time complexity of interaction-based methods is at least O(mnd), which is one order higher than that of representation-based methods, max(O(md), O(nd)).
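The sketch below illustrates the representation-based idea with a toy 1-D CNN encoder in NumPy (`cnn_encode` is an invented helper, not DeText's implementation): each text is convolved and max-pooled on its own, so it is mapped to a fixed-length embedding independently of the other side and of its length n.

```python
import numpy as np

def cnn_encode(word_matrix, filters, window=3):
    """Encode a (d, n) word matrix into a fixed-length vector.

    filters: (num_filters, d, window) convolution kernels.
    Max-pooling over all window positions makes the output size
    independent of the sequence length n.
    """
    d, n = word_matrix.shape
    num_filters = filters.shape[0]
    num_positions = max(n - window + 1, 1)
    outputs = np.empty((num_filters, num_positions))
    for i in range(num_positions):
        patch = word_matrix[:, i:i + window]
        if patch.shape[1] < window:                       # pad very short texts
            patch = np.pad(patch, ((0, 0), (0, window - patch.shape[1])))
        outputs[:, i] = np.tanh((filters * patch).sum(axis=(1, 2)))
    return outputs.max(axis=1)                            # (num_filters,) fixed-length embedding

d, num_filters = 8, 16
filters = np.random.randn(num_filters, d, 3)
query_emb = cnn_encode(np.random.randn(d, 4), filters)    # short text, n = 4
doc_emb = cnn_encode(np.random.randn(d, 50), filters)     # long text, n = 50
print(query_emb.shape, doc_emb.shape)                     # (16,) (16,)
```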
Interaction layer. It generates deep features based on the text embeddings. Many options are provided, such as concatenation, cosine similarity, etc.
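For example, a cosine-similarity interaction between a query embedding and a document embedding yields a single deep feature, while concatenation stacks the two vectors for the layers that follow. This is a generic NumPy sketch with invented helper names, not DeText code:

```python
import numpy as np

def cosine_interaction(src_emb, tgt_emb, eps=1e-8):
    """Single scalar deep feature: cosine similarity of the two embeddings."""
    return float(src_emb @ tgt_emb /
                 (np.linalg.norm(src_emb) * np.linalg.norm(tgt_emb) + eps))

def concat_interaction(src_emb, tgt_emb):
    """Deep feature vector: concatenation of source and target embeddings."""
    return np.concatenate([src_emb, tgt_emb])

query_emb, doc_emb = np.random.randn(16), np.random.randn(16)
print(cosine_interaction(query_emb, doc_emb))         # a scalar similarity feature
print(concat_interaction(query_emb, doc_emb).shape)   # (32,)
```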
Wide & Deep Feature Processing. We combine the traditional features with the interaction features (deep features) in a wide & deep fashion.
MLP layer. The MLP layer combines the wide features and the deep features to produce the final score.
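Putting the last two steps together, here is a minimal sketch of how traditional wide features and deep interaction features can feed a small MLP that emits a relevance score. The shapes, weights, and the `mlp_score` helper are all hypothetical, not the production model:

```python
import numpy as np

def mlp_score(wide_features, deep_features, w1, b1, w2, b2):
    """Combine wide (hand-crafted) and deep (interaction) features with a 2-layer MLP."""
    x = np.concatenate([wide_features, deep_features])   # wide & deep combination
    hidden = np.tanh(w1 @ x + b1)                         # hidden layer
    return float(w2 @ hidden + b2)                        # scalar relevance score

wide = np.array([0.3, 1.0, 0.0])           # e.g. traditional features such as a text-match score
deep = np.random.randn(33)                 # e.g. concatenation (32) + cosine (1) interaction features
w1, b1 = np.random.randn(8, wide.size + deep.size), np.zeros(8)
w2, b2 = np.random.randn(8), 0.0
print(mlp_score(wide, deep, w1, b1, w2, b2))
```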
To learn more about the framework, visit the GitHub repo. LinkedIn is already applying DeText to search ranking in people search and job search.