The root of a word may or may not be an existing word in the language. For example, "mov" is the root of "movie", and "emot" is the root of "emotion".

Contents

Deep learning NLP 101
Deep learning NLP 101

I recently completed a comprehensive deep learning NLP course from Stanford.

This course is a detailed introduction to cutting-edge deep learning research applied to NLP. It covers word vector representations, window-based neural networks, recurrent neural networks, long short-term memory (LSTM) models, convolutional neural networks, and some recent models that use a memory component. On the programming side, I learned to apply, train, debug, visualize, and build my own neural network models.

Note: The course lectures and programming assignments are available in this repository.

Vector representation

In traditional NLP, words are treated as discrete symbols, which are then represented as one-hot vectors. The problem with discrete symbols is that one-hot vectors carry no notion of similarity: any two distinct one-hot vectors are orthogonal, so all word pairs look equally unrelated. The alternative is to encode similarity into the vectors themselves.
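To see the problem concretely, here is a minimal sketch in plain NumPy (the toy vocabulary is hypothetical, not from the course): the dot product between any two distinct one-hot vectors is zero, so "hotel" and "motel" look exactly as unrelated as "hotel" and "banana".

```python
import numpy as np

# Hypothetical toy vocabulary: each word gets its own dimension.
vocab = ["hotel", "motel", "banana"]

def one_hot(word):
    """Return a vector with a 1 at the word's vocabulary index, 0 elsewhere."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

hotel, motel, banana = (one_hot(w) for w in vocab)

# Every pair of distinct one-hot vectors is orthogonal, so one-hot
# encoding gives a similarity of 0 for ALL distinct word pairs.
print(np.dot(hotel, motel))   # 0.0
print(np.dot(hotel, banana))  # 0.0
```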

Vector representation is a method of representing words as dense, real-valued vectors. A dense vector is constructed for each word so that words that occur in similar contexts have similar vectors. Vector representations are considered the starting point for most NLP tasks and are what makes deep learning effective even on small datasets. Word2vec and GloVe, the vector representation techniques created by Google (Mikolov et al.) and Stanford (Pennington, Socher, Manning) respectively, are popular and widely used for NLP problems. Let's take a look at these techniques.
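As a rough sketch of the idea in practice (this assumes the gensim library and a made-up four-sentence corpus; it is not the course's own code or a substitute for training on a real corpus), one can train a small word2vec model and query it for similar words:

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus; real models are trained on billions of tokens.
sentences = [
    ["the", "movie", "was", "great"],
    ["the", "film", "was", "great"],
    ["i", "watched", "a", "movie"],
    ["i", "watched", "a", "film"],
]

# Train a small skip-gram word2vec model (sg=1 selects skip-gram).
model = Word2Vec(sentences, vector_size=50, window=2,
                 min_count=1, sg=1, epochs=200)

# Words that appear in similar contexts end up with similar vectors.
print(model.wv.most_similar("movie", topn=2))
print(model.wv.similarity("movie", "film"))
```

On a corpus this tiny the neighbours are noisy, but the mechanism is the same at scale: because "movie" and "film" occur in near-identical contexts, their learned vectors end up close together.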