Extremely simple and fast word2vec implementation with Negative Sampling + Sub-sampling
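The two techniques this description names can be sketched in a few lines of plain Python. The formulas (keep probability `sqrt(t/f) + t/f` with `t = 1e-5`, and negatives drawn proportionally to `count^0.75`) follow the original word2vec heuristics; the function names and table size are illustrative, not taken from this repository:

```python
import math

def subsample_keep_prob(count, total, t=1e-5):
    # word2vec sub-sampling: a word with relative frequency f is kept
    # with probability sqrt(t / f) + t / f, capped at 1.0, so very
    # frequent words are randomly discarded during training
    f = count / total
    return min(1.0, math.sqrt(t / f) + t / f)

def negative_sampling_table(counts, power=0.75, table_size=1000):
    # negative sampling draws "noise" words proportionally to their
    # count raised to 0.75, flattening the unigram distribution;
    # a lookup table lets us sample in O(1) with random.choice
    weights = {w: c ** power for w, c in counts.items()}
    total = sum(weights.values())
    table = []
    for w, wt in weights.items():
        table.extend([w] * max(1, round(table_size * wt / total)))
    return table
```

Rare words get a keep probability of 1.0 (never discarded), while a word making up 10% of the corpus is kept only about 1% of the time.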
Implementation of Facebook's FastText in Java
Using pre-trained word embeddings (FastText, Word2Vec)
Dict2vec is a framework to learn word embeddings using lexical dictionaries.
Web service: retrieve similar keywords using Tencent's 8-million-word embedding model and the Spotify Annoy engine
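The similar-keyword lookup described above is, at its core, a nearest-neighbour search over word vectors. Annoy approximates this with random-projection trees so it scales to millions of vectors; the brute-force version it approximates can be sketched in pure Python (the toy vectors below are made up for illustration):

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(query, vectors, topn=3):
    # exact brute-force neighbour search by cosine similarity;
    # Annoy trades a little accuracy for sub-linear query time
    scored = [(w, cosine(query, v)) for w, v in vectors.items()]
    return sorted(scored, key=lambda x: -x[1])[:topn]
```

For an index of 8 million 200-dimensional vectors the brute-force scan becomes impractical per query, which is exactly why an approximate index like Annoy is used.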
Cross-Lingual Alignment of Contextual Word Embeddings
PyTorch implementation of Word2Vec (Skip-Gram model), with the trained embeddings visualized using t-SNE
Storage and retrieval of Word Embeddings in various databases
This repository contains source code to binarize any real-value word embeddings into binary vectors.
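The simplest way to turn real-valued embeddings into binary vectors is a sign threshold per component. This is only a naive baseline, not necessarily the method this repository implements (learned binarization, e.g. via an autoencoder, typically preserves more similarity structure), but it shows the interface such a transformation exposes:

```python
def sign_binarize(vec, threshold=0.0):
    # naive baseline: each real-valued component maps to 1 if it
    # exceeds the threshold, else 0; the result can be packed into
    # bits for compact storage and fast Hamming-distance comparison
    return [1 if x > threshold else 0 for x in vec]
```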
Dutch word embeddings, trained on a large collection of Dutch social media messages and news/blog/forum posts.
Aspect-Based Sentiment Analysis
Extending conceptual thinking with semantic embeddings.
Repository for the experiments described in the paper "DeepSentiPers: Novel Deep Learning Models Trained Over Proposed Augmented Persian Sentiment Corpus"
flairR: Bring Amazing Flair NLP to R
TwitPersonality: Computing Personality Traits from Tweets using Word Embeddings and Supervised Learning
📖 📚 📰 Workshop that demonstrates using and analyzing text in R.
This project aims to implement the Transformer Encoder blocks using various Positional Encoding methods.
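One of the positional encoding methods such a project would likely include is the fixed sinusoidal scheme from the original Transformer paper, where even dimensions use sine and odd dimensions cosine of position-scaled frequencies. A dependency-free sketch (the repository itself presumably uses a tensor library):

```python
import math

def sinusoidal_positions(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            # paired dimensions (2i, 2i+1) share one frequency
            angle = pos / (10000 ** ((i // 2 * 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe
```

Because the encoding is deterministic, it requires no training and extrapolates to sequence lengths not seen during training, which is why it is a common baseline against learned positional embeddings.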
RiverText is a framework that standardizes state-of-the-art incremental word-embedding methods. Feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!